| Field | Type | Min | Max |
|---|---|---|---|
| id | int64 | 5 | 1.93M |
| title | string (length) | 0 | 128 |
| description | string (length) | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | string (length) | 14 | 581 |
| tag_list | string (length) | 0 | 120 |
| body_markdown | string (length) | 0 | 716k |
| user_username | string (length) | 2 | 30 |
1,197,135
IAC-IMX8MP-Kit main functions and fields
IAC-IMX8MP-KIT IAC-IMX8MP-KIT Development Board, it adopts NXP IMX8MPlus series processor,...
0
2022-09-19T09:13:23
https://dev.to/nice01997007/iac-imx8mp-kit-main-functions-and-fields-45e3
programming, android, ram, linux
## IAC-IMX8MP-KIT

The IAC-IMX8MP-KIT development board adopts an NXP i.MX8M Plus series processor. The i.MX8M Plus family focuses on a neural processing unit (NPU) and vision system, advanced multimedia, and industrial automation with high reliability.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ykb7ar32by741iatbsx6.jpg)

1. The i.MX 8M Plus is a powerful quad-core Arm® Cortex®-A53 processor with speeds up to 1.8 GHz, integrated with a 2.3 TOPS NPU that greatly accelerates machine learning.
2. The vision engine is composed of two camera inputs and an HDR-capable Image Signal Processor (ISP) capable of 375 MPixels/s.
3. The advanced multimedia capabilities include 1080p60 H.265 and H.264 video encode and decode, plus 3D and 2D graphics acceleration supporting 1 GPixel/s, OpenVG 1.1, OpenGL ES 3.1, Vulkan, and OpenCL 1.2 FP.
4. Multiple audio and microphone interfaces enable immersive audio and voice.
5. For industrial applications, real-time control is enabled by an integrated 800 MHz Arm® Cortex®-M7.
6. Robust control networks are possible via CAN-FD interfaces.
7. Dual Gb Ethernet ports, one supporting Time Sensitive Networking (TSN), drive gateway applications with low latency.
8. High industrial system reliability for safety is supported by DRAM inline ECC as well as ECC on internal software-accessible SRAMs.

## IAC-IMX8MP-CM

The IAC-IMX8MP-CM core board adopts an 8-layer, high-precision immersion-gold PCB with a high-TG board, providing reliable electrical and anti-interference performance. It integrates the CPU, LPDDR4, eMMC, power management chip, etc. The board-to-board connector exposes more than 200 pins, which fully expand the hardware resources of the i.MX8M Plus and can multiplex and combine different interface functions according to the pin conditions to build a carrier board that meets your needs.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8lo832vtn0tydp7g0t5x.jpg)

◆ Onboard NXP i.MX8M Plus processor
◆ Onboard 2GB LPDDR4, 16GB eMMC (default configuration, industrial grade)
◆ The core board adopts 8-layer PCB high-precision immersion-gold technology
◆ Core board size: 60mm*63mm, suitable for various embedded applications
◆ The core board uses 2×140-pin board-to-board connectors to lead out the core board resources
◆ 5V power supply with an onboard power management chip
◆ Supports Linux 5.10.35; Qt 5.15.2
◆ Supports Android 11

[www.qiyangtech.com](http://www.qiyangtech.com/)
nice01997007
1,197,215
The Emerging Role of Data Science and AI in Telecommunications
Introduction The importance of data science and AI to the sector is growing as it develops. Telecom...
0
2022-09-19T12:03:10
https://dev.to/subalak16742267/the-emerging-role-of-data-science-and-ai-in-telecommunications-30nh
## Introduction

The importance of data science and AI to the telecom sector is growing as it develops. Telecom companies' infrastructure, network, and customer service operations produce massive amounts of data, which is why data science is now so widely used in telecom. Data science and AI give telecom operators the tools to interpret that data and use it for various purposes, including boosting reliability, cutting costs, and enhancing customer service.

## Demand for Data Science and AI in Telecom

The need for data scientists is only growing due to the enormous amount of data the telecom sector generates. According to Analytics Insight research, the telecommunications and IT sector holds a 33 percent market share and is driving explosive growth in big data. To put that into perspective, the organization estimates that spending on big data in telecom will increase from $59 billion in 2019 to over $105 billion in 2023.

## COVID-19 Driving the Demand for Data Science and AI in Telecom

The COVID-19 pandemic has accelerated the demand for data science in telecom, which had already been driven in recent years by the Internet of Things (IoT), the rollout of 5G, and growing consumer pressure for personalized services. Almost every sector that relies on digital communications, including schools, healthcare and pharma, government organizations, and the global supply chain, now more than ever recognizes the necessity of dependable connectivity. The telecom industry is increasingly embracing data science, AI, and automation to ensure that crucial communications in this new remote world remain smooth during the crisis, despite reduced staff and limited access to facilities like call centers and data centers. Additionally, by utilizing data analytics, telecom companies can react more quickly to today's rapidly changing environments. Check out the trending [Artificial intelligence course in Pune](https://www.learnbay.co/artificial-intelligence-ai-course-training-pune) offered by Learnbay.
## How Businesses Can Leverage Data Science and AI in Telecom

### Network Security

Cybercriminals find telecommunications a very alluring target. After all, telecoms connect to almost everything in the modern digital world through intricate international networks, and they store a great deal of very private data. Companies can use data science to view events in real time, spot security anomalies, and conduct predictive analysis to identify where vulnerabilities are and how to mitigate them proactively. Additionally, by utilizing machine learning, businesses can analyze threat patterns to stop attacks before they spread too widely.

### Fraud Mitigation

Customers who use telecom networks are also susceptible to cybercrime, and the pandemic is making things worse. The cost of fraud to the global telecom industry was estimated at a staggering $29 billion in 2018, according to a study by the Communications Fraud Control Association (CFCA); that is how pervasive fraud is in the telecom sector. Big data in telecom allows businesses to analyze real-time data to pinpoint the origin of fraudulent transactions and link them to earlier activity to stop future fraud.

### Network Optimization

More people depend on network connectivity than ever, particularly during COVID-19 outages, so telecoms must make sure that speed and performance are always at their best. To accomplish this, they are utilizing data science, AI, and machine learning algorithms to find patterns in data that help them detect and predict irregularities before customers experience any service degradation.

### Customer Experience

Personalization and prompt resolution of any issues customers may have are two crucial components of the customer experience. Telecoms use data science, AI, and analytics to ascertain what customers want based on their previous interactions and preferences.
Through logical self-service menus, chatbots, and natural language processing (NLP) made possible by machine learning, telecoms also use AI to offer quick and intelligent customer service.

### Robotic Process Automation (RPA)

RPA is widely used in the telecom industry to automate repetitive tasks, which reduces errors, saves labor and costs, and speeds up operations. According to CustomerThink, a global online community of business and thought leaders that regularly comments on customer-centric strategies, there are several ways RPA can help telecom companies.

### Supply Chain Management

When the COVID-19 pandemic's initial global shortage of toilet paper became a reality for people sheltering in place, it was blamed on so-called hoarders. Although hoarding may have played a role, the main cause was the disruption of the global supply chain. Telecommunications, the backbone of the global supply chain, needed to adapt to this disruption. Through big data analytics, data science, AI, and automation, telecom companies were able to adjust to this abrupt change in demand and ease the pressure on the supply chain.

## Conclusion

Since data science and AI in telecommunications are here to stay, businesses require qualified personnel to keep shaping the future of communications. Upskilling online is a great option if you want to participate in this exciting effort while supporting crucial telecom infrastructure and services. If you desire to learn more about data science and AI, explore the IBM-accredited [data science course in Pune](https://www.learnbay.co/data-science-course-training-in-pune). Master job-ready skills and secure your dream MAANG job.
subalak16742267
1,197,644
Drupal: Override Title Tag of Profiles
The profile module is a nice tool to have on a Drupal site if you're looking to create public-facing...
0
2022-09-19T19:53:46
https://ryanrobinson.technology/websites/drupal/override-title-tag-profiles/
php, drupal
The profile module is a nice tool to have on a Drupal site if you're looking to create public-facing profiles of your users (e.g. staff). But it has a few weak spots, including being unable to change the URL alias (it can only be /profile/id) or the page's title, which shows up as [Profile Name] #[id], e.g. "Staff Profile #1". That's not very helpful.

There are two places the profile title needs to be overridden: what appears in the main body of the page, and the title tag for the page, which you'll see in your browser tab. In my case I wanted to show the first name and last name instead, so I started by creating those fields on the profile as standard text fields. With the fields in place, this post, and the [accompanying GitHub code and configuration](https://github.com/ryan-l-robinson/Drupal-profile-title-override), show how I worked around those issues.

## The URL Alias

Like other entities, the URL can be changed using [the pathauto module](https://drupal.org/project/pathauto) to generate an alias based on a pattern. Note that this takes a change in the pathauto configuration, which might not be obvious if, like me, you did the initial round of pathauto configuration long before adding profiles. Here's a screenshot of the settings page, which can be found at /admin/config/search/path/settings:

![Screenshot of entities settings screen with options to select Custom Block, Media, Custom Menu Link, Content, Profile, Taxonomy Term, and User](https://ryanrobinson.technology/assets/img/2022/07/Pathauto_entities.png)

Once the ability to set paths for profiles is turned on, you can switch over to the Patterns tab and create the pattern. I made mine `/staff/[profile:field_first_name]-[profile:field_last_name]`.

## The Page Title

I fixed the page title displayed within the body with a view. I also altered the display of a profile to not show the default title, and to not show the first name and last name anywhere else.
I won't break down every setting, but here's a screenshot of the view configuration:

![Screenshot of the view configuration including the fields First Name, Last Name, and Custom Text combining them](https://ryanrobinson.technology/assets/img/2022/07/Profile_Title_View.PNG)

You can also see this in configuration YML form in the GitHub project's /sync/config/views.view.profile.yml file. Once the view is ready, add the block to the correct place in your theme, and turn off the standard Page Title block for those pages (based on the URL).

## The Browser Title

The second one required a custom module, albeit a relatively simple one. This is the key part:

```php
/**
 * Implements hook_preprocess_html().
 *
 * Overrides the "Staff Profile #[ID]" title with the first name and
 * last name of the profiled staff member instead.
 */
function profile_title_preprocess_html(&$variables) {
  if (stripos($variables['head_title']['title'], "Staff Profile") !== false) {
    // Get the ID from the original title to be replaced.
    $profile_id = substr($variables['head_title']['title'], stripos($variables['head_title']['title'], "#") + 1);
    if (isset($profile_id)) {
      // Load the profile.
      $profile = \Drupal::entityTypeManager()->getStorage('profile')->load($profile_id);
      if (isset($profile)) {
        $first_name = $profile->get('field_first_name')->getString();
        $last_name = $profile->get('field_last_name')->getString();
        if (isset($first_name) && isset($last_name)) {
          // Change the title to first name and last name.
          $variables['head_title']['title'] = "$first_name $last_name";
        }
      }
    }
  }
}
```

Note: with PHP 8+ I used str_starts_with instead of stripos, but stripos works just as well for this purpose.

Along with overriding the title, the module also provides a warning to users. Because it relies on overriding only when the current title follows a certain pattern, it is a little bit fragile, in that changing the display title of the profile will result in the code no longer being activated.
It won't break the site or anything, but it will return to showing the default unhelpful title. Here's that code. It's fairly simple, firing on the hook for the profile type edit form and then displaying a standard warning.

```php
/**
 * Implements hook_form_FORM_ID_alter().
 *
 * Adds a warning to the admin page for the profile, to advise against
 * changing the title.
 */
function profile_title_form_profile_type_edit_form_alter(&$form, \Drupal\Core\Form\FormStateInterface $form_state, $form_id) {
  $message = [
    '#type' => 'container',
    '#markup' => '<p>Warning: Do not change the display label of the staff public profile without altering the corresponding code in the custom module profile_title.</p>
<p>Failing to do so will result in the title of the profile page reverting back to showing the generic profile name instead of the staff member name.</p>',
  ];
  \Drupal::messenger()->addWarning($message);
}
```
ryanr
1,197,672
Projects/GitHub
Best practices for collaborating with a group -communication is key -branching, pushing, and...
0
2022-09-19T21:22:37
https://dev.to/arbarrington/projectsgithub-5b2g
Best practices for collaborating with a group:

- communication is key
- branching, pushing, and merging
- 80% planning, 20% coding

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pudam2vno2zgbbewoebp.png)

https://whimsical.com/phase-3-brainstorming-BM5ooc1X74pNLtxmnZA52i@2bsEvpTYSt1HjGi1reMKufMh9aaSgohub2N

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lt7ghgm0fsy8lh919v8z.png)
arbarrington
1,197,796
Introduction to the C++ programming language
What is cout... cout performs the output or print function in the C++ programming language, i.e. it prints a given...
0
2022-09-20T03:16:32
https://dev.to/xondamir02/c-dasturlash-tiliga-kirish-2lg5
cpp, beginners, programming
**What is cout?** In the C++ programming language, `cout` performs the output (print) function; that is, it is used to print a given variable or piece of data to the console. For example:

```cpp
#include <iostream>
using namespace std;

int main() {
    cout << "Assalomu alaykum!";
    return 0;
}
```

In the code above, `cout` prints the sentence "Assalomu alaykum!" to the console.
xondamir02
1,197,877
Why Hire HTML Developer From India? - Here Are The Reasons
India is a leading country for outsourcing IT services in the world. Well, there are many reasons why...
0
2022-09-20T05:29:25
https://dev.to/amansingh1/why-hire-html-developer-from-india-here-are-the-reasons-5g8d
hirehtml5developerinindia, html5developer, html
India is a leading country for outsourcing IT services worldwide, and there are many reasons why businesses from all over the world consider hiring professionals from India. According to statistics, India's IT outsourcing revenue is expected to reach $7.14 billion in 2022 and to show an annual growth rate (CAGR 2022-2027) of 11.80%, resulting in a volume of US $12.47 billion by 2027.

By 2023/2024, the country is expected to have the highest number of software developers, potentially overtaking the USA in this regard. So, if you are thinking about building a cutting-edge website, consider hiring developers from India. Assigning your project work to professionals in a country like India is not only time-saving but can also be an ideal alternative for small-scale companies experiencing financial difficulties. The quality of talent available in India is exceptional, and you can be assured of outcomes that meet your requirements.

To build interactive sites, choose HTML5 and **[hire a dedicated HTML5 developer](https://www.i-webservices.com/hire-html5-developers)** from India for complete development support. HTML is the most advanced and commonly used UI development technology, helping businesses build interactive and efficient websites. Professionals combine HTML5 with supporting technologies such as CSS3 to create websites that communicate quickly and effectively and provide the best levels of user experience.

## Benefits Of Hiring HTML5 Developers From India

With the rich skills of Indian developers, a business can get websites that bring it success. Hiring remote HTML5 developers from India comes with incredible benefits, including:

### 1. Cost Savings

Hiring developers from India is cheaper, averaging $18-$30 per hour, which is less than in European countries. So, if you want to make big savings, hire talent from India.
It will definitely be one of the wisest decisions of your life.

### 2. Easy Access To Expertise

In India, you will find developers with great expertise and knowledge. They have hands-on experience with the latest tools and technologies, and by leveraging their knowledge you will get what you have been looking for. Also, there will be no communication barrier, since India is currently the second-largest English-speaking country in the world; the most reasonable estimate is that English speakers account for 10% of its population, or 125 million people.

### 3. Use Of Advanced Technology

Developers in India are constantly learning new technologies and improving their skills to keep up with a changing field and serve clients with cutting-edge solutions. They aim to deliver the best to their customers and keep their customer base happy and satisfied.

### 4. Delivery On Time

You can expect your Indian developers to be committed to delivering on-time service to customers. Nurtured on strong values and good ethics, web developers in India are dedicated to their work and show a high level of professionalism, which certainly contributes to a reduced time to market.

### 5. Quality Assurance

They work at low prices, but that doesn't mean Indian developers do not adhere to quality. You can rely on Indian developers and designers if you want high-quality service at a low price. They are vastly experienced, consistent learners, and leverage advanced technology to deliver the intended results.

## Where To Hire HTML5 Developers From?

If you want to hire an HTML developer from India, choose [iWebServices](https://www.i-webservices.com/). The company has rich experience in developing advanced apps and websites within a stipulated time and budget. It houses the most experienced and knowledgeable professionals, who leverage advanced technology and tools to deliver what customers desire.
amansingh1
1,198,103
Reimagine log storage: Parseable
Context Whether you're a Developer or SRE or DevOps - when you're tasked with setting up...
0
2022-09-20T10:22:49
https://dev.to/parseable/reimagine-log-storage-parseable-4o08
rust, kubernetes, cloud, devops
## Context

Whether you're a Developer, SRE, or DevOps engineer, when you're tasked with setting up logging, there are essentially two options:

### Search Engines

Set up an indexing-based search engine masquerading as log storage. Such products are difficult to deploy and run in the long term: you manage indexes, local and remote storage, different node types, and so on. Additionally, indexing in the ingestion flow causes high CPU and memory consumption while preventing very high ingestion rates.

### SaaS Platforms

Alternatively, pay for an exorbitantly costly SaaS platform that is very easy to get started with. Over time, data volumes and costs increase, but data gravity means getting out of the platform is difficult, and you end up storing only a fraction of your log data to save costs.

---

We dealt with both these options in our work, and we know neither is ideal. This pushed us to think about what a modern, cloud-native log storage and observability platform would look like.

> It is clear that the log storage of the future won't be another index-based search engine in a new language.

We set out to build a completely indexing-free log storage platform. This led to [Parseable](https://github.com/parseablehq/parseable).

## Introduction

Parseable is a simple, efficient, and fast log storage and observability platform. Think the simplicity of Prometheus, but for logs. Written in Rust, Parseable leverages Apache Arrow, Parquet, and widely available object storage platforms for efficiency, cost effectiveness, and performance. It is compatible with standard logging agents like FluentBit, LogStash, etc. Parseable also offers a built-in, intuitive GUI for log query and analysis.

![Parseable Design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/di7ryav5bqds6x84j4gm.png)

## Conclusion

As we launched Parseable, we saw tremendous interest in the community to try it out. We'd love for you to try out Parseable, and we're all ears for any questions, feedback, and comments.
- Get Started: https://www.parseable.io/docs/quick-start
- Slack: https://launchpass.com/parseable
- GitHub: https://github.com/parseablehq/parseable
- Documentation: https://www.parseable.io/docs/introduction
nitisht
1,198,481
Just Testing
Just writing a quick test post before linking to my GitHub.
0
2022-09-20T18:01:05
https://dev.to/cafecodr/just-testing-1k5o
markdown, github
Just writing a quick test post before linking to my GitHub.
cafecodr
1,198,484
Apache Web Gateway with Docker
Apache Web Gateway with Docker Hi, community. In this article, we will programmatically...
0
2022-09-20T18:16:40
https://community.intersystems.com/post/apache-web-gateway-docker
beginners, devops, webgateway, programming
# Apache Web Gateway with Docker

Hi, community. In this article, we will programmatically configure an Apache Web Gateway with Docker using:

* The HTTPS protocol.
* TLS\SSL to secure the communication between the Web Gateway and the IRIS instance.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z56klvoqxmqw2wv77ntr.png)

We will use two images: one for the Web Gateway and the second one for the IRIS instance. All necessary files are available in this [GitHub repository](https://github.com/lscalese/docker-webgateway-sample). Let's start with a git clone:

```bash
git clone https://github.com/lscalese/docker-webgateway-sample.git
cd docker-webgateway-sample
```

## Prepare your system

To avoid problems with permissions, your system needs these users and groups:

* www-data
* irisowner

They are required to share certificate files with the containers. If they don't exist on your system, simply execute:

```bash
sudo useradd --uid 51773 --user-group irisowner
sudo groupmod --gid 51773 irisowner
sudo useradd --user-group www-data
```

## Generate certificates

In this sample, we will use three certificates:

1. For HTTPS web server usage.
2. For TLS\SSL encryption on the Web Gateway client.
3. For TLS\SSL encryption on the IRIS instance.

A ready-to-use script is available to generate them. However, you should customize the subject of the certificates; simply edit the [gen-certificates.sh](https://github.com/lscalese/docker-webgateway-sample/blob/master/gen-certificates.sh) file. This is the structure of the OpenSSL `subj` argument:

1. **C**: Country code
2. **ST**: State
3. **L**: Location
4. **O**: Organization
5. **OU**: Organization Unit
6. **CN**: Common name (basically the domain name or the hostname)

Feel free to change these values.

```bash
# sudo is needed due to chown, chgrp, chmod ...
sudo ./gen-certificates.sh
```

If everything is ok, you should see two new directories, `./certificates/` and `~/webgateway-apache-certificates/`, with certificates:

| File | Container | Description |
|--- |--- |--- |
| ./certificates/CA_Server.cer | webgateway, iris | Authority server certificate |
| ./certificates/iris_server.cer | iris | Certificate for the IRIS instance (used for mirror and webgateway communication encryption) |
| ./certificates/iris_server.key | iris | Related private key |
| ~/webgateway-apache-certificates/apache_webgateway.cer | webgateway | Certificate for the Apache web server |
| ~/webgateway-apache-certificates/apache_webgateway.key | webgateway | Related private key |
| ./certificates/webgateway_client.cer | webgateway | Certificate to encrypt communication between the webgateway and IRIS |
| ./certificates/webgateway_client.key | webgateway | Related private key |

Keep in mind that these are self-signed certificates, so web browsers will show security alerts. Obviously, if you have a certificate delivered by a certified authority, you can use it instead of a self-signed one (especially for the Apache server certificate).

## Web Gateway Configuration files

Take a look at the configuration files.

### CSP.INI

You can see a CSP.INI file in the `webgateway-config-files` directory. It will be pushed into the image, but its content can be modified at runtime. Consider this file a template. In this sample, the following parameters will be overridden on container startup:

* Ip_Address
* TCP_Port
* System_Manager

See [startUpScript.sh](https://github.com/lscalese/docker-webgateway-sample/blob/master/startUpScript.sh) for more details. Roughly, the replacement is performed with the `sed` command line.
Also, this file contains the SSL\TLS configuration to secure the communication with the IRIS instance:

```
SSLCC_Certificate_File=/opt/webgateway/bin/webgateway_client.cer
SSLCC_Certificate_Key_File=/opt/webgateway/bin/webgateway_client.key
SSLCC_CA_Certificate_File=/opt/webgateway/bin/CA_Server.cer
```

These lines are important: we must ensure the certificate files are available to the container. We will do that later in the `docker-compose` file with a volume.

### 000-default.conf

This is an Apache configuration file. It enables the HTTPS protocol and redirects HTTP calls to HTTPS. The certificate and private key files are set up in this file:

```
SSLCertificateFile /etc/apache2/certificate/apache_webgateway.cer
SSLCertificateKeyFile /etc/apache2/certificate/apache_webgateway.key
```

## IRIS instance

For our IRIS instance, we configure only the minimal requirements to allow SSL\TLS communication with the Web Gateway; this involves:

1. The `%SuperServer` SSL config.
2. Enabling the SSLSuperServer security setting.
3. Restricting the list of IPs that can use the Web Gateway service.

To ease the configuration, config-api is used with a simple JSON configuration file:

```json
{
  "Security.SSLConfigs": {
    "%SuperServer": {
      "CAFile": "/usr/irissys/mgr/CA_Server.cer",
      "CertificateFile": "/usr/irissys/mgr/iris_server.cer",
      "Name": "%SuperServer",
      "PrivateKeyFile": "/usr/irissys/mgr/iris_server.key",
      "Type": "1",
      "VerifyPeer": 3
    }
  },
  "Security.System": {
    "SSLSuperServer": 1
  },
  "Security.Services": {
    "%Service_WebGateway": {
      "ClientSystems": "172.16.238.50;127.0.0.1;172.16.238.20"
    }
  }
}
```

There is no action needed: the configuration will be loaded automatically on container startup.
## Image tls-ssl-webgateway

### dockerfile

```
ARG IMAGEWEBGTW=containers.intersystems.com/intersystems/webgateway:2021.1.0.215.0

FROM ${IMAGEWEBGTW}

ADD webgateway-config-files /webgateway-config-files
ADD buildWebGateway.sh /
ADD startUpScript.sh /

RUN chmod +x buildWebGateway.sh startUpScript.sh && /buildWebGateway.sh

ENTRYPOINT ["/startUpScript.sh"]
```

By default, the entry point is `/startWebGateway`, but we need to perform some operations before starting the web server. Remember that our CSP.ini file is a template, and we need to change some parameters (IP, port, system manager) at startup. `startUpScript.sh` performs these changes and then executes the initial entry point script `/startWebGateway`.

## Starting containers

### docker-compose file

Before starting the containers, the `docker-compose.yml` file must be modified:

* **`SYSTEM_MANAGER`** must be set to the IP address(es) authorized to access the **Web Gateway Management** page at https://localhost/csp/bin/Systems/Module.cxw. Basically, it's your IP address (it can be a comma-separated list).
* **`IRIS_WEBAPPS`** must be set to the list of your CSP applications. The list is separated by spaces, for example: `IRIS_WEBAPPS=/csp/sys /swagger-ui`. By default, only `/csp/sys` is exposed.
* Ports 80 and 443 are mapped. Adapt them to other ports if they are already used on your system.

```yaml
version: '3.6'

services:
  webgateway:
    image: tls-ssl-webgateway
    container_name: tls-ssl-webgateway
    networks:
      app_net:
        ipv4_address: 172.16.238.50
    ports:
      # change the local ports if they are already used on your system.
      - "80:80"
      - "443:443"
    environment:
      - IRIS_HOST=172.16.238.20
      - IRIS_PORT=1972
      # Replace by the list of ip addresses allowed to open the CSP system manager
      # https://localhost/csp/bin/Systems/Module.cxw
      # see the .env file to set the environment variable.
      - "SYSTEM_MANAGER=${LOCAL_IP}"
      # the list of web apps
      # /csp allows the webgateway to redirect all requests starting with /csp to the iris instance
      # You can specify a list separated by a space: "IRIS_WEBAPPS=/csp /api /isc /swagger-ui"
      - "IRIS_WEBAPPS=/csp/sys"
    volumes:
      # Mount certificate files.
      - ./volume-apache/webgateway_client.cer:/opt/webgateway/bin/webgateway_client.cer
      - ./volume-apache/webgateway_client.key:/opt/webgateway/bin/webgateway_client.key
      - ./volume-apache/CA_Server.cer:/opt/webgateway/bin/CA_Server.cer
      - ./volume-apache/apache_webgateway.cer:/etc/apache2/certificate/apache_webgateway.cer
      - ./volume-apache/apache_webgateway.key:/etc/apache2/certificate/apache_webgateway.key
    hostname: webgateway
    command: ["--ssl"]

  iris:
    image: intersystemsdc/iris-community:latest
    container_name: tls-ssl-iris
    networks:
      app_net:
        ipv4_address: 172.16.238.20
    volumes:
      - ./iris-config-files:/opt/config-files
      # Mount certificate files.
      - ./volume-iris/CA_Server.cer:/usr/irissys/mgr/CA_Server.cer
      - ./volume-iris/iris_server.cer:/usr/irissys/mgr/iris_server.cer
      - ./volume-iris/iris_server.key:/usr/irissys/mgr/iris_server.key
    hostname: iris
    # Load the IRIS configuration file ./iris-config-files/iris-config.json
    command: ["-a", "sh /opt/config-files/configureIris.sh"]

networks:
  app_net:
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"
```

Build and start:

```bash
docker-compose up -d --build
```

Containers `tls-ssl-iris` and `tls-ssl-webgateway` should be started.

## Test Web Access

### Apache default page

Open the page [http://localhost](http://localhost). You will be automatically redirected to [https://localhost](https://localhost). The browser will show a security alert; this is the standard behaviour with a self-signed certificate. Accept the risk and continue.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/54gf37i2sa51rulabgkn.png)

### Web Gateway management page

Open [https://localhost/csp/bin/Systems/Module.cxw](https://localhost/csp/bin/Systems/Module.cxw) and test the server connection.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gjzwz0a9sok3mec12d43.png)

### Management portal

Open [https://localhost/csp/sys/utilhome.csp](https://localhost/csp/sys/utilhome.csp)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hfzt5tvscpk6ujsl9dfl.png)

Great! The Web Gateway sample is working!

## IRIS Mirror with Web Gateway

In a previous article, we built a mirror environment, but the Web Gateway was a missing piece. Now we can improve that. A new repository, [iris-mirroring-with-webgateway](https://github.com/lscalese/iris-mirroring-with-webgateway), is available, including the Web Gateway and a few more improvements:

1. Certificates are no longer generated on the fly but in a separate process.
2. IP addresses are replaced by environment variables in the docker-compose and JSON configuration files. Variables are defined in the `.env` file.
3. The repository can be used as a template.

See the repository's [README.md](https://github.com/lscalese/iris-mirroring-with-webgateway) file to run an environment like this:

![image](https://github.com/lscalese/iris-mirroring-with-webgateway/blob/master/img/network-schema-01.png?raw=true)
intersystemsdev
1,198,669
What is a REST API?
REpresentational State Transfer (REST) is an architectural style that handles the client-server...
18,888
2022-09-20T23:50:42
https://dev.to/rembertdesigns/what-is-a-rest-api-4257
productivity, programming, javascript, webdev
**RE**presentational **S**tate **T**ransfer (**REST**) is an architectural style that handles the client-server relationship, aiming for speed and performance by using reusable components. REST as a technology was introduced to the world in a 2000 doctoral [dissertation by Roy Fielding](https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm). Nowadays it is generally preferred to SOAP (Simple Object Access Protocol), as REST uses less bandwidth and is simpler and more flexible for internet usage. We can use it to fetch information from, or send information to, a web service; this is done via an HTTP request to the REST API. ## What is a REST API? A **REST API** is a way of easily accessing web services without excess processing. Whenever a RESTful API is called, the server will transfer to the client a representation of the state of the requested resource. In fact, we use this just about every day! If you’re trying to find videos about biking on YouTube, you’d type “biking” into the YouTube search field, hit enter, and you’ll then see a list of videos about biking. Conceptually, a REST API works just like this! You search for something, and you get a list of results back from the service that you’re requesting from. An **API** is an application programming interface. It’s a set of rules that allow programs to communicate with each other. The developer creates the API on the server and allows the client to talk to it. **REST** is what determines how the API looks. It is the set of rules that developers follow when they create an API. One of these rules states that you should be able to get a piece of data (a resource) when you link to a specific URL. Each URL is called a **request**, while the data sent back to you is called a **response**. ## RESTful Architecture So what are the basic features of REST? - **Stateless**: Client data is not stored on the server; the session is stored client-side (typically in session storage). 
- **Client<->Server**: There is a separation of concerns between the front-end (client) and the back-end (server). They operate independently of each other and both are replaceable. - **Cache**: Data from the server can be cached on the client, which can improve performance speed. - **URL Composition**: We use a standardized approach to the composition of base URLs. For example, a `GET` request to `/cities`, should yield all the cities in the database, whereas a `GET` request to `/cities/portland` would render the city with an ID of Portland. Similarly, REST utilizes standard methods like `GET`, `PUT`, `DELETE` and `POST` to perform actions, which we’ll take a look at in the next section! So we can define a RESTful API as one that is **stateless**, it **separates concerns between client-server**, it **allows caching of data client-side** and it utilizes **standardized base URLs and methods** to perform the actions required to manipulate, add or delete data. ## REST in Action Let’s now take a closer look at how this is done! Our request is sent from the client to the server via HTTP in the form of a web URL. Using either GET, POST, PUT or DELETE. Then a response is sent back from the server in the form of a resource, which could be anything like HTML, XML, Image, or JSON. JSON is by far the most popular format, so we’ll be using that for our example. ![REST API Model](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dwwpywtizmvv3qf1ck35.png) **HTTP** has five methods that are commonly used in a REST-based architecture: POST, GET, PUT, PATCH, and DELETE. In fact, they correspond to create, read, update, and delete (CRUD) operations respectively. It should also be noted that there are other methods that are less frequently used, such as OPTIONS and HEAD. - **GET**: This method is used to **read** (or retrieve) a representation of a resource. If all is well, GET returns a representation in XML or JSON and an HTTP response code of 200 (OK). 
In an error case, it most often returns a 404 (NOT FOUND) or 400 (BAD REQUEST). - **POST**: This method is often utilized to **create** new resources. In particular, it’s used to create subordinate resources. That is, subordinate to some other (e.g. parent) resource. On successful creation, it returns HTTP status 201 along with a Location header linking to the newly-created resource. - **PUT**: It’s used for **updating** capabilities and also to **create** a resource (in the case where the resource ID is chosen by the client instead of the server). Essentially, a PUT to a URL containing a non-existent resource ID creates that resource. A successful update returns 200 (or 204 if not returning any content in the body) from a PUT. If using PUT for create, it returns HTTP status 201 on successful creation. - **PATCH**: It’s used to **modify** resources. The PATCH request only needs to contain the changes to the resource, not the complete resource. This is similar to PUT; however, the body contains a set of instructions describing how a resource currently residing on the server should be modified to produce a new version. So the PATCH body should not just be a modified part of the resource, but should be expressed in some kind of patch language like JSON Patch or XML Patch. - **DELETE**: Fairly self-explanatory, it’s used to **delete** a resource identified by a URL. Upon successful deletion, it returns HTTP status 200 (OK) along with a response body. ## Working with REST Data Furthermore, it has become common practice for REST APIs to also return data in a standard format. As mentioned, the most popular format nowadays is JSON (JavaScript Object Notation). The standardization of the formatting of the data is another step towards uniformity in the way resources are interacted with on the web, allowing developers to solve problems rather than spend their time configuring basic architecture! 
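The CRUD-to-method mapping described above can be sketched in code. Below is a minimal illustration using Python's standard library; the base URL and the `/cities` resource are hypothetical, and the requests are only constructed, never sent:

```python
import json
import urllib.request

BASE = "https://api.example.com"  # hypothetical REST service

def build_request(method, path, payload=None):
    """Construct (but do not send) an HTTP request for a REST resource."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(BASE + path, data=data, method=method)
    req.add_header("Content-Type", "application/json")
    return req

# Each CRUD operation maps onto an HTTP method and a resource URL:
read_all   = build_request("GET", "/cities")                                 # read
create_one = build_request("POST", "/cities", {"name": "Portland"})          # create
update_one = build_request("PUT", "/cities/portland", {"name": "Portland"})  # update
delete_one = build_request("DELETE", "/cities/portland")                     # delete
```

Sending any of these with `urllib.request.urlopen(req)` would return the server's response, typically a JSON body plus one of the status codes listed above.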
When requesting data from an API, you might get back something like this: ```js { title: "Hi, I am JSON", content: { chapter: "1", page: "150", firstParagraph: "I am JSON, this is what I look like when I am returned from an API." }, author: "Richard Rembert" } ``` This format allows for easy access to the data within JSON, using dot notation such as `data.title`, which returns "Hi, I am JSON". Where can you find RESTful APIs? Everywhere! Twitter, [Google](https://developers.google.com/drive/api/v2/reference), [Open Weather Map](https://openweathermap.org/api), [YouTube](https://developers.google.com/youtube/v3). Most of the popular services we use daily utilize a RESTful architecture for their API service, so go forth & explore the world of adding API functionality to your websites & apps! ## Summary We’ve taken a look at what REST is, as well as the principles which govern its architecture. We’ve looked at how REST works with APIs to send and receive data from client to server & back again. We’ve also taken a look at the JSON format, which we’ll most often be working with when accessing and manipulating our data! ## Conclusion If you liked this blog post, follow me on [Twitter](https://twitter.com/RembertDesigns) where I post daily about Tech related things! ![Buy Me A Coffee](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ijinng8k7bf23v0o6gkj.png) If you enjoyed this article & would like to leave a tip — click [here](https://www.buymeacoffee.com/rembertdesigns) ### 🌎 Let's Connect - [Portfolio](https://www.rembertdesigns.co/) - [Twitter](https://twitter.com/RembertDesigns) - [LinkedIn](https://www.linkedin.com/in/rrembert/) - [Hashnode](https://rembertdesigns.hashnode.dev/) - [Devto](https://dev.to/rembertdesigns) - [Medium](https://medium.com/@rembertdesigns) - [Github](https://github.com/rembertdesigns) - [Codepen](https://codepen.io/rembertdesigns)
rembertdesigns
1,198,794
Answer: Add row number column to jquery datatables
answer re: Add row number column to jquery...
0
2022-09-21T02:27:46
https://dev.to/codeirawan/answer-add-row-number-column-to-jquery-datatables-188b
{% stackoverflow 38712667 %}
codeirawan
1,198,927
My First Coding Blog
Hey everybody! Everybody? Right. Like I have a huge following for my DEV blog. Let me try...
0
2022-09-21T06:09:20
https://dev.to/kevinlearnstocode/my-first-coding-blog-27p1
Hey everybody! Everybody? Right. Like I have a huge following for my DEV blog. Let me try again. Hey fellow cohort members! So I decided to just pick a lab and write about the process of completing it for my first blog. Very exciting. And the lucky lab is... Phase 1 Review Strings Lab!!! Since this is review of things I should have already learned in the prework, I'm just going to fork the lab, clone it, open it in VSC, take a look at index.test.js and then go from there. OK. So the test indicates I'll need a current user, a few different welcome messages and.... yeah, I'm just gonna make life easy and run learn test before I write any code and see exactly what they're looking for. 10 failed tests! Great start. So, the first failed test is "currentUser is not defined". That one's easy enough. const currentUser = "Kevin Price"; My own name... the vanity. learn test. Alright. One passing test. Let me see if I can knock a couple of tests out in a row. I don't need this blog to be ten pages long. Looking over the failed tests I need a welcomeMessage that includes currentUser and an exclamation point. Easy enough. const welcomeMessage = "Welcome to Flatbook, " + currentUser + "!"; ...should I have used interpolation? I think I probably should have used interpolation. But, it's just review... Let's just run learn test again and see what happens! HA! Four passing tests! Suck it interpolation! Six more tests to go. Let's see what's next. excitedWelcomeMessage. All Caps. Contains currentUser. Exclamation point. How about? const excitedWelcomeMessage = "WELCOME TO FLATBOOK, " + currentUser + "!"; Let's see! Six tests passing! Not bad, but it looks like there's something wrong with the code I just put in. Ohhhh. The currentUser name is also supposed to be capitalized. Wait. I know this. There's an easy way to do this with a string... do I keep trying to remember or just google it? Life is short. I'm googling. toUpperCase!! Yep. That was it. 
OK so now do I need to do a separate const for the uppercase name? This is where I'm regretting not using interpolation. Or can I just attach it directly to the... const excitedWelcomeMessage = "WELCOME TO FLATBOOK, " + currentUser.toUpperCase() + "!"; HAHAHAHAHA!! Seven tests passed! Now let's see what these last three tests are. shortGreeting and just the first initial of the currentUser name. OK. So... can I just treat currentUser like an array? Don't strings usually behave like arrays? Right? Seems a bit too easy, but... const shortGreeting = "Welcome, " + currentUser[0] + "!"; 10 passing!!!! Yes. Never a doubt. Oh. And I now notice that the lab actually walks you through the process... and they wanted me to use interpolation. Oh, and I was supposed to use slice for getting the first initial. Apparently those would have made my code more flexible. Oh well. Maybe next time!
kevinlearnstocode
1,198,973
Search Insert Position Python – Leetcode Solutions
Solution to the problem Search insert Position in Python is...
0
2022-09-21T07:59:00
https://dev.to/hecodesit/search-insert-position-python-leetcode-solutions-11gb
search, insert, position, python
The solution to the problem Search Insert Position in Python is here: https://hecodesit.com/search-insert-position-python-leetcode-solutions/
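For context, the linked problem is usually solved with a binary search. Here is a sketch of that standard approach (not necessarily the exact code in the linked post):

```python
def search_insert(nums, target):
    """Return the index of target in sorted nums, or where it would be inserted."""
    lo, hi = 0, len(nums)
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] < target:
            lo = mid + 1  # target is in the right half
        else:
            hi = mid      # target is at mid or in the left half
    return lo

print(search_insert([1, 3, 5, 6], 5))  # → 2
```

This runs in O(log n) time, matching the problem's constraint.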
hecodesit
1,199,205
“API” Meaning - Definition in Computer Programming
Learn the definition of the term "API" and where it came from
0
2022-09-21T13:52:41
https://dev.to/patrickdavid/api-meaning-definition-in-computer-programming-396b
api, programming, dictionary
--- title: “API” Meaning - Definition in Computer Programming published: True description: Learn the definition of the term "API" and where it came from tags: API, Programming, Dictionary # cover_image: https://direct_url_to_image.jpg # Use a ratio of 100:42 for best results. --- What does the term "API" stand for? How has it become a regular part of our vocabulary? Learn what an API stands for and what it means in this short guide. ## What is an "API?" API stands for ["Application Programming Interface."](https://en.wikipedia.org/wiki/API) An API is a set of programming instructions that allows software to interact with other software. In other words, an API is a way for two pieces of software to communicate with each other. APIs are often used to connect different pieces of software. For example, the [Google Maps API](https://developers.google.com/maps) allows developers to add Google Maps functionality to their websites and applications. The Twitter API allows developers to access Twitter data and create Twitter-based applications. APIs are also used to allow software to interact with hardware. For example, the Arduino API allows developers to write code that interacts with [Arduino devices](https://www.arduino.cc/en/hardware). ### How has the abbreviation "API" become part of our regular vocabulary? The term "API" has become a regular part of our vocabulary because more and more software is being created that relies on APIs to function. In addition, software engineering has grown as a job function by more than 300% in the past six years in the United States. As the world becomes increasingly connected, APIs will only become more critical. Dalia Yashinsky, one of the experts in linguistics and English at [GrammarBrain](https://grammarbrain.com), said, "It's becoming more common for internet abbreviations to enter our daily lives. From 'OMG' to 'LOL,' these abbreviations have become commonly understood replacements for idioms. 
The abbreviation for 'API' in software engineering might not be known, but its meaning is commonly understood." ### Who came up with the first "API?" [Tim Berners-Lee](https://www.w3.org/People/Berners-Lee/) created the first API in the early days of the internet. Berners-Lee is credited with inventing the World Wide Web and creating the first API. His API allowed different pieces of software to communicate with each other over the internet. ### What does a "REST API" stand for? A "REST API" stands for "Representational State Transfer API." A REST API is an API that uses simple HTTP requests to GET, POST, PUT, and DELETE data. REST APIs are often used to create web-based applications. ### What is an "API Key?" An API Key is a unique code used to access an API. The API provider typically assigns API Keys. For example, the Google Maps API requires an API Key for use. ### What is an "API Endpoint?" An API Endpoint is a URL that is used to access an API. For example, the Google Maps API has the following endpoint: [https://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA&key=YOUR_API_KEY](https://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA&key=YOUR_API_KEY) ### What is an "API Request?" An API Request is a request made to an API Endpoint. When you make a request to an API, you are typically requesting data. For example, if you ask the Google Maps API for directions from San Francisco to Los Angeles, you are requesting data (the directions). ### What is an "API Response?" An API Response is the data that is returned from an API Request. In the example above, the data returned from the Google Maps API would be the directions from San Francisco to Los Angeles. ## Common Questions Here are some common questions about the abbreviation "API." ### Is "API" an abbreviation? Yes, "API" is an abbreviation. It stands for "Application Programming Interface." ### What does "API" mean in business? 
"API" stands for "Application Programming Interface." An API is a set of programming instructions that allows software to interact with other software. In other words, an API is a way for two pieces of software to communicate with each other. ### What does "API" mean in programming? The abbreviation "API" in business and programming means the same thing. "API" stands for "Application Programming Interface." An API is a set of programming instructions that allows software to interact with other software. In other words, an API is a way for two pieces of software to communicate with each other. ### What does an "Open API" mean? An Open API is an API that is publicly available and does not require authentication to access. ### What does a "Private API" mean? A Private API is an API that requires authentication to access. ### What is an "API Management System?" An API Management System is a software system that helps manage APIs. An API Management System typically provides authentication, rate-limiting, and analytics features. ### What is an "API Gateway?" An API Gateway is a software system that acts as a gateway for API requests. An API Gateway typically provides authentication, rate-limiting, and load-balancing features. ### What is an "API Proxy?" An API Proxy is a software system that acts as a proxy for API requests. An API Proxy typically provides features such as authentication and rate-limiting. ### What is an "API Portal?" An API Portal is a website that provides documentation and other resources for an API. ### What is an "API Developer Portal?" An API Developer Portal is another name for an API Portal: a site where developers can find documentation, get API keys, and try out an API. ### What does "API" mean in the pharma industry? The "API" in the pharma industry stands for Active Pharmaceutical Ingredient. The API is the part of the drug that has the intended therapeutic effect. The two senses are rarely confused, since the programming use of "API" has dominated since 2004. 
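Tying the endpoint, key, request, and response concepts together, here is a small illustrative sketch in Python. The endpoint mirrors the Google Maps example above, the key is a placeholder, and no request is actually sent:

```python
from urllib.parse import urlencode

# The geocoding endpoint from the example above.
ENDPOINT = "https://maps.googleapis.com/maps/api/geocode/json"

def build_api_request(address, api_key):
    """Build an API Request URL from an API Endpoint, parameters, and an API Key."""
    return ENDPOINT + "?" + urlencode({"address": address, "key": api_key})

url = build_api_request("1600 Amphitheatre Parkway, Mountain View, CA", "YOUR_API_KEY")
# Fetching this URL would yield the API Response: JSON geocoding data.
```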
Syndicated from [GoogleAPIs.com](https://storage.googleapis.com/about-apis/blog/api-meaning-definition-in-computer-programming.html)
patrickdavid
1,199,503
Debunking Common Misconceptions About Passwordless Authentication
Original post written by Mallory Sword Glenn and Salman Ladha for Auth0 blog. Increase...
0
2022-09-21T19:36:03
https://auth0.com/blog/debunking-common-misconceptions-about-passwordless-authentication/?utm_source=devto&utm_medium=sc&utm_campaign=devto
security, programming, devops
> Original post written by [Mallory Sword Glenn](https://auth0.com/blog/authors/mallory-sword-glenn/) and [Salman Ladha](https://auth0.com/blog/authors/salman-ladha/) for Auth0 blog. ### Increase user security, convenience, and privacy by enabling authentication using device biometrics ### <br> A future where passwords no longer exist may be right around the corner—for real, this time. Earlier this year, ironically, on [World Password Day](https://www.firstpost.com/world/world-password-day-2022-history-and-importance-of-passwords-in-current-times-10635471.html), Apple, Google, and Microsoft collectively announced plans to [extend their support](https://www.forbes.com/sites/davidbirch/2022/07/25/thanks-to-apple-microsoft-and-google-passwords-will-finally-die/?sh=31f8e4a92072) for passwordless authentication, building from the specification created by the [FIDO Alliance](https://fidoalliance.org/specifications/#:~:text=FIDO%20UAF%20supports%20a%20passwordless,%2C%20entering%20a%20PIN%2C%20etc.) and the [World Wide Web Consortium (W3C)](https://www.w3.org/2019/03/pressrelease-webauthn-rec.html.en). Through a technology called [Passkey](https://www.trustedreviews.com/explainer/what-is-passkey-4231178), users will be able to authenticate into compatible websites and applications by taking the same action they use to unlock their phones. This eliminates the need to remember a password. For any consumer-facing business where digital engagement has become a crucial component of the [customer experience](https://www.pwc.com/us/en/zz-test/assets/pwc-consumer-intelligence-series-customer-experience.pdf), this announcement highlights an important technology trend for future innovation in their overall customer identity and access management (CIAM) strategies. Most consumers don't like remembering hundreds of passwords, so this is a prime opportunity to promote the adoption of passwordless authentication. 
In light of that, we thought we'd break down a handful of common misconceptions associated with passwordless authentication, specifically using device biometrics, as we gear up for a future where arbitrary strings of characters will perhaps take a back seat in how we log in. ![Passwordless Misconception](https://images.ctfassets.net/23aumh6u8s0i/50DnWYiAPCa18mVZOfrOBP/c29ceb90bd492ae124f89ff23ca1f237/Debunking_Misconceptions_About_Passwordless_Separator_V01_3x.jpg) ## Misconception #1: Passwordless Is Not Secure Since its inception in the 1960s, the username and password challenge has been the de facto experience for how we log in to applications. As a result, it's only natural to feel like anything without a password is insecure. The reality is that we've been tricked into a false sense of security. When we look at the data, passwords consistently pose security challenges. [Nordpass](https://nordpass.com/most-common-passwords-list/) highlights that the average consumer must remember around 100 passwords for all their online accounts. Due to the sheer volume of credentials we have to remember, [86% of consumers admit to reusing a password](https://info.auth0.com/expectation-vs-reality_confirm.html), which presents a massive opportunity for attackers. The [2022 Verizon Data Breach Investigation Report](https://www.verizon.com/business/resources/reports/dbir/) found that almost half of all data breaches start with stolen credentials. Unfortunately, these breaches can cost a business an average of [six million dollars annually](https://www.akamai.com/lp/report/ponemon-the-cost-of-credential-stuffing-report). In an environment where password reuse among consumers is the norm, where cybercriminals are capitalizing on poor behavior, and where companies are suffering the consequences, passwords are proving to be a less than ideal form of authentication. 
Passwordless authentication using [WebAuthn](https://webauthn.guide/) (a specification written by W3C and FIDO) device biometrics presents a unique solution to this problem as it's effectively a two-factor authentication experience. Rather than having users authenticate based on something they know, they log in using something they have (the device) and something they are (their biometric information). This is why some sources go as far as saying passwordless authentication with WebAuthn device biometrics is the only standards-based authentication method that is [unphishable](https://www.w3.org/2018/Talks/06-WebAuthn.pdf). >💡 <span style="text-decoration:underline">Reality</span>: Passwordless authentication using device biometrics is actually more secure than username and password credentials because it's a 2FA experience. ## Misconception #2: Passwordless Doesn't Benefit the Business On the surface, the relationship between passwordless authentication and business value might not be obvious. The friction consumers experience is the key to debunking this myth. CIAM has evolved from being seen as a cost center line item to a revenue-generating activity due to the positive impact it can have on increasing user conversions. As consumer applications have become ubiquitous and central to most aspects of everyday life, [every signup and sign-in is a built-in opportunity](https://auth0.com/resources/whitepapers/CIAM-conversion-retention) to engage with customers. Historically, identity was solely the responsibility of IT teams. Now that customer identity offers an opportunity to provide seamless experiences at every touchpoint in the customer journey, it has become the responsibility and consideration of sales and marketing teams as well. 
If a customer is frustrated by the signup process, as [83% of respondents](https://info.auth0.com/expectation-vs-reality) are, according to an Auth0 survey, these customers will abandon what they're doing in search of a friction-free registration and login process. Revenue is on the line; [88% of online shoppers](https://www.smallbizgenius.net/by-the-numbers/ux-statistics/#gref), for example, report that they would not return to a website after having a bad experience. A good experience starts from the first click, and passwordless frees users from having to create yet another username and password—[a source of frustration for 53% of global consumers](https://www.intelligentcio.com/apac/2021/05/06/auth0-survey-reveals-frustrations-with-password-management/). [Read more...](https://auth0.com/blog/debunking-common-misconceptions-about-passwordless-authentication/?utm_source=devto&utm_medium=sc&utm_campaign=devto)
robertinoc_dev
1,199,655
How to Build a pyproject.toml File
In this tutorial I'll be walking you through how to build a simple pyproject.toml file. I'll be...
0
2022-09-22T15:21:14
https://dev.to/2320sharon/how-to-build-a-pyprojecttoml-file-4mk8
python, pyproject, pypi, tutorial
In this tutorial I'll be walking you through how to build a simple `pyproject.toml` file. I'll be including a sample `pyproject.toml` as well as include links to some resources I found very helpful when learning how to construct my own `pyproject.toml`. First, I'll be showing you the full `pyproject.toml` then I'll break down the purpose of each section of the file. `Setuptools` has a [fantastic page](https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html) that explains more about `pyproject.toml` and how it can be used with `setuptools`. ## Sample `pyproject.toml` ``` # tells pip what build tool to use to build your package [build-system] requires = ["setuptools>=61.0"] build-backend = "setuptools.build_meta" # tells pip how to build your pypi webpage & what dependencies to install [project] name = "sample_pkg" dynamic = ["readme"] version = "0.0.30" authors = [ { name="Sharon Fitzpatrick", email="sharon.fitzpatrick23@gmail.com" }] description = "A tool that performs xyz" dependencies = ["matplotlib", "numpy<1.23.0"] license = { file="LICENSE" } requires-python = ">=3.8" classifiers = [ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", ] # (BETA) tells setuptools you will be using a readme file for the long description field for your pypi profile. [tool.setuptools.dynamic] readme = {file = ["README.md"]} # (OPTIONAL) tells pypi that these urls are where your project's source code and issue tracker reside [project.urls] "Homepage" = "https://github.com/pypa/packaging.python.org" "Bug Tracker" = "https://github.com/pypa/packaging.python.org/issues" ``` ## [build-system] - Tells pip what build tool to use to build your pip package. You can choose build tools like `poetry`,`hatchling`,`setuptools`, etc. to build your package. Without the `build-system` pip would have to guess what tools you used to create your package. 
If you want to learn more about the `pyproject.toml` file, check [pip's documentation](https://pip.pypa.io/en/stable/reference/build-system/pyproject-toml/) ### `requires = ["setuptools>=61.0"]` - Tells pip exactly which versions of `setuptools` can be used to build the package ### `build-backend = "setuptools.build_meta"` - Tells pip that you will be using `setuptools` to build your package ## [project] - This section tells pip the metadata for your package. The metadata for your package is information that describes your package like your package's name, version number, and dependencies. ### `name = "sample_pkg"` - Tells pip the name of your package. This must be a unique name on pypi. ### `dynamic = ["readme"]` - Tells setuptools that you will be creating the `long description` dynamically from the readme file. The `long description` is what is displayed on your pypi's project page. ### `version = "0.0.30"` - Tells pip the current version of the package ### `authors = [{ name="Sharon Fitzpatrick",email="sharon.fitzpatrick23@gmail.com" }]` - Tells pip who the author of the package is - You must include both the author's name and email for this to work properly ### `description = "A tool that performs xyz"` - Tells pip a short description which will be displayed on your package's pypi page. ### `dependencies = ["matplotlib","numpy<1.23.0"]` - Tells pip what dependencies your package needs to run - You can even specify the versions of packages that are compatible with your package ### `license = { file="LICENSE" }` - Tells pip you will be using the file named LICENSE in your repository as your license ### `requires-python = ">=3.8"` - Tells pip what python version your package requires ### `classifiers = ["Programming Language :: Python :: 3","License :: OSI Approved :: MIT License","Operating System :: OS Independent",]` - Used by PyPI to categorize each package's release; it describes who the package is for, what systems it can run on, and how mature the package is. 
- You can find the [full list of classifiers from pypi](https://pypi.org/classifiers/). ## [tool.setuptools.dynamic] - This section is specific to `setuptools` and still in beta. [Read more here.](https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html#:~:text=%22my_package%22%5D-,Dynamic%20Metadata,-%23) - It tells `setuptools` that the following fields will be populated dynamically by files included into the repository ### `readme = {file = ["README.md"]}` - Tells `setuptools` you will be using a readme file for the long description field for your pypi profile. - **NOTE:** your README file must be named exactly README. - **NOTE:** a GitHub styled `README.md` may not render correctly on `pypi` ## [project.urls] - This is an optional section that tells pypi the urls associated with your package ### "Homepage" = "https://github.com/pypa/packaging.python.org" - This tells pypi where your package is from (optional) ### "Bug Tracker" = "https://github.com/pypa/packaging.python.org/issues" - This tells pypi where issues and bugs for your project are being tracked (optional) ## Ready to upload to PyPi? Now that you know how to build your `pyproject.toml` file are you ready to upload your package to PyPi? I created a guide to show you how to [upload to PyPi the modern way](https://dev.to/2320sharon/the-modern-way-of-uploading-a-pypi-package-2gpd).
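One related `[project]` field the sample above doesn't cover is optional dependencies, which let users install extras such as `pip install sample_pkg[dev]`. A small sketch (the group names and packages here are just illustrative):

```
[project.optional-dependencies]
dev = ["pytest", "black"]
docs = ["sphinx"]
```

- Tells pip what extra dependencies to install when a user asks for a named group; for example, `pip install sample_pkg[dev]` would install `pytest` and `black` alongside the regular dependencies.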
2320sharon
1,199,816
How to use Gitcoin
Support our initiative on Gitcoin today! We are crowdfunding our initiative on Gitcoin! If...
0
2022-09-22T04:44:24
https://dev.to/techbychoice-team/how-to-use-gitcoin-ga2
gitcoin, metamask, web3
## Support our initiative on Gitcoin today! We are crowdfunding our initiative on [Gitcoin](https://bit.ly/tbc-gitcoin)! If you believe in diversity and inclusion in tech, paid open source is a way to lower the barriers to entry for underrepresented people. With your support we can help more people break into tech! ### Let's cover how to contribute! We understand not everyone knows how to navigate the [Gitcoin](https://gitcoin.co) website, so we're going to show you how to do the following things: 1. Create a Gitcoin account 2. Contribute to our [Gitcoin grant](https://bit.ly/tbc-gitcoin) ### Step 1: Create your Gitcoin account Let's get started with creating your account! Feel free to follow along with our step-by-step video, or the steps below. {% embed https://www.youtube.com/embed/ki7oPfpXgkU %} 1. Head over to the [Gitcoin website](https://gitcoin.co/) to make an account. ![Image of Gitcoin website with sign in button highlighted](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bnyaxlvgzb9j6qovux5a.png) 2. If you have a [GitHub](https://www.github.com) account, log in with it! If you don't have one, that's ok! We will go through the steps to create an account. ![Image of Github Screen to make a Github account](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7lhrm4b1x0ymqd0ym0nv.png) 3. To activate your GitHub account, you'll need to enter the passcode they email you to verify you are the human that owns the email account you signed up with. ![Image of Github screen that ask for passcode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uh46nohid5qc79lyftu4.png) 4. Now that the account is created and verified, we'll need to give GitHub access to connect to Gitcoin, so click approve! ![Github screen to approve the connection between Github and Gitcoin](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/792b8yt947heukxo4b39.png) 5. We officially have a Gitcoin account! Now let's head over to the [Gitcoin Grants Page](https://www.gitcoin.co/grants). 
We see this banner at the top that says **"Maximize your Impact! Your current match is 50.0%"**. We want to turn that 50% into 150%, so let's get into that! To get started, click the "Verify" button. ![Image of the Gitcoin Grants page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w4dhymxts65aufpy7rv0.png) 6. To get that bonus we have to create a [Gitcoin Passport](https://passport.gitcoin.co/). In the Web3 space, we can be whoever we want to be. But when it comes to supporting social good, we need to make sure we are who we say we are. So Gitcoin created "Passports" that connect your crypto wallet to different social accounts to prove you're human. The more accounts you connect, **the higher your trust score goes and the higher your match will be**. Let's go ahead and click **Gitcoin Passport** to connect your crypto wallet to the Gitcoin Passport. ![Image of Gitcoin settings page that shows your trust score and how to increase it](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6dcda2q80zap7ow5zvep.png) **So what accounts could I connect to?** You can connect to accounts like Twitter, Google, LinkedIn, and more! You can check out the full list below. ![List of accounts you can connect your passport to](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qil44y3ssqt6c0pk37n1.png) You may be saying, hold up, I don't have a crypto wallet. Don't worry, we'll go over this too! Head over to our [Notion page to learn how you can set up your MetaMask Wallet](https://www.notion.so/techbychoice/How-to-Make-a-MetaMask-Wallet-320ff4fb429f4ab6a9940660bcb93079) 7. Let's head over to [Gitcoin Passport](https://passport.gitcoin.co/) and connect our wallet. ![Screen showing a user connecting their MetaMask wallet to the Passport website](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l2e1vi0do7tpwupctb5t.png) 8. Now that we're here, we'll need to connect a few additional accounts to make our trust score go up. 
This screenshot shows us connecting one account, but let's aim to connect three accounts to make our trust score hit 150%. ![Image of verifying our account through Twitter](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ixo369mcncvjmup5am0h.png) 9. Now let's head back over to the Gitcoin website and reconnect with the same wallet we linked our accounts to, so we can check our trust score. ![Image of website that's connecting wallet to update passport score](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iri4zmvay5ocfb8ycu9s.png) 10. Our trust score has gone up and our contributions will be matched at 150%! ![Image of trust score going up to 150%](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sewiqilmurgh856ufybr.png) And like that, we're done setting up our account! Now let's learn how to support! ### Step 2: Contribute to our [Gitcoin grant](bit.ly/tbc-gitcoin) We created our account and we're ready to support our grant. Feel free to follow along with our step-by-step video, or the steps below {% embed https://www.youtube.com/watch?v=noMAMhRP2YY %} 1. Let's head over to the [Gitcoin Grants page](www.gitcoin.co/grants) and click the **View All Grants** button. ![Image showing the Gitcoin Grants home screen](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3iovehqs17pub0pycpbh.png) 2. In the search bar, we want to type in **Tech by Choice** to find our grant. ![Image showing someone searching for Tech by choice on the Gitcoin grants page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hbnyw9axo1iifraz5vtz.png) 3. From the grant's home page, we'll need to click on the **Add to Cart** button. ![Image of Tech by Choice grant with user clicking on "add to cart" page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5nuuav8me1r2zkoxsi71.png) 4. Once the item is in the cart, we can go to checkout to contribute. - Click on the shopping cart icon in the top right corner. 
- In the dropdown, click on the **Checkout** button. ![Image that shows the shopping cart to go to checkout](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1smw71pbb07r51p04q8p.png) 5. On the checkout page there are a few things I want to point out. - Section 1: In this part of the website we can change the cryptocurrency that we want to use for this donation. For this example we will keep it at DAI - Section 2: This part shows us how much we're giving to the grant - Section 3: This shows how much the match will be. The match will be calculated based on how many people have already given to this grant and the amount we're giving. (This image shows 0 because we grabbed these screenshots the moment our grant went live. The match at the time of this article is $31 for every $1) - Section 4: Shows how much you're spending to support the grant in your cart. The **Your Total Contribution** is the impact you're making for this one grant! Once everything looks good, let's click **I'm Ready to Checkout**! ![Image that describes the different parts of the checkout for the grants](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e5dbtd38ou7mg3urcub7.png) 6. To complete the checkout we need to approve the transaction from our wallet. You'll get a number of prompts that you'll need to read through and approve. **Make sure you don't close the tab while things are working on the backend!** ![Image that shows what it's like to checkout with a crypto wallet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uidbpgn62owqqeq9y6m0.png) ![Image showing the MetaMask confirmation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2loo5unca4zzgfxpyr15.png) 7. And just like that, you're done! You've supported diversity in tech. ![Confirmation page that you've submitted a contribution](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i8k3j8dvptdutghah4k8.png) ## Thank you for supporting diversity in tech! 
Please share this article and our active [Gitcoin grant: Tech by Choice: Paid Open Source Initiative](bit.ly/tbc-gitcoin) far and wide to help us make an impact for the Tech by Choice community.
techbychoice-team
1,206,717
Announcing General Availability of BigQuery with Hasura
Introduction We recently announced the general availability of BigQuery in Hasura! Now...
0
2022-09-30T08:05:46
https://hasura.io/blog/bigquery-general-availability-hasura/
database, graphql
--- title: Announcing General Availability of BigQuery with Hasura published: true date: 2022-09-29 10:50:57 UTC tags: database,graphql canonical_url: https://hasura.io/blog/bigquery-general-availability-hasura/ --- ## Introduction ![Announcing General Availability of BigQuery with Hasura](https://hasura.io/blog/content/images/2022/09/bigquery.png) We recently announced the general availability of BigQuery in Hasura! Now you can connect a BigQuery database to your Hasura application to consume data. This article teaches you how to: - configure BigQuery with Hasura - connect a BigQuery database to Hasura - set up relationships between BigQuery tables - use Hasura permissions to perform data validation > Supported versions: Hasura GraphQL engine v2.0.0-alpha.1 onwards. ## Configure BigQuery with Hasura To connect a BigQuery database to Hasura, you need a "service account key" file. That file contains the credentials required by Hasura to connect to the database. Navigate to the project's settings, create the `BIGQUERY_SERVICE_ACCOUNT` environment variable, and set it to the content of the "service account key" file. ![Announcing General Availability of BigQuery with Hasura](https://hasura.io/blog/content/images/2022/09/hasura-env-variable.png) If you need help obtaining your service account key, check [this section](https://hasura.io/docs/latest/databases/bigquery/getting-started/cloud/#connecting-to-a-bigquery-project) in the documentation. ## Connect BigQuery Database to Hasura Navigate to the `Connect Existing Database` page in the project console to set up the database: 1. Choose a name for the database 2. Choose `Big Query` for the "Data Source Driver" 3. Select the `Environment Variable` option 4. Enter the newly created environment variable `BIGQUERY_SERVICE_ACCOUNT` 5. Enter your GCP project id 6. 
Enter the dataset (or datasets) ![Announcing General Availability of BigQuery with Hasura](https://hasura.io/blog/content/images/2022/09/connect-bigquery-hasura.png) After connecting the database, you should be able to track the tables from the specified dataset. Click on "Track All". ![Announcing General Availability of BigQuery with Hasura](https://hasura.io/blog/content/images/2022/09/database-track-tables.png) The database is now ready! You can perform [GraphQL queries](https://hasura.io/learn/graphql/intro-graphql/graphql-queries/) on your data. For a more comprehensive guide on [getting started with BigQuery on Hasura Cloud](https://hasura.io/docs/latest/databases/bigquery/getting-started/cloud/), check the documentation. Let's test the integration with the following query: ``` query authors { publication_authors { id name } } ``` Running the query returns a list of all authors from the database, as shown in the figure below. ![Announcing General Availability of BigQuery with Hasura](https://hasura.io/blog/content/images/2022/09/hasura-bigquery-query.png) You can query authors and articles individually, but there is no relationship between them. For example, you cannot retrieve all authors and their articles. ## Set up relationships between BigQuery tables Nested object queries refer to fetching data for a type and data from a nested or related type. To make nested object queries, you need to set up relationships between the two tables - `authors` and `articles`. ### Create array relationship Navigate to the `Relationships` page in the `authors` table and click on the "Configure" button. ![Announcing General Availability of BigQuery with Hasura](https://hasura.io/blog/content/images/2022/09/authors-relationship-hasura.png) That opens a new section where you can configure the relationship. Configure it as follows: 1. Choose `Array Relationship` (one-to-many) for the "Relationship Type" field 2. Name your relationship - e.g. `articles` 3. 
Leave the "Reference Schema" field as it is 4. Choose `articles` for the "Reference Table" field 5. Select `id` for the "From" field and `author_id` for the "To" field 6. Save the relationship ![Announcing General Availability of BigQuery with Hasura](https://hasura.io/blog/content/images/2022/09/configure-relationship-hasura.png) Let's test the relationship by running the following query: ``` query author_articles { publication_authors { id name articles { id title published_on body } } } ``` Running the query returns all the authors and their articles, as illustrated in the image below. ![Announcing General Availability of BigQuery with Hasura](https://hasura.io/blog/content/images/2022/09/authors-articles-query.png) ### Create object relationship The next step involves creating an object relationship between the `articles` and `authors` tables. Configure the relationship as follows: 1. Choose `Object Relationship` (one-to-one) for the "Relationship Type" field 2. Name your relationship - e.g. `authors` 3. Leave the "Reference Schema" field as it is 4. Choose `authors` for the "Reference Table" field 5. Select `author_id` for the "From" field and `id` for the "To" field 6. Save the relationship ![Announcing General Availability of BigQuery with Hasura](https://hasura.io/blog/content/images/2022/09/articles-author-relationship.png) Let's test the relationship by fetching the articles and their authors. ``` query articles_author { publication_articles { id title published_on body author { id name } } } ``` The image shows the list of articles and the id and name of each article's author. ![Announcing General Availability of BigQuery with Hasura](https://hasura.io/blog/content/images/2022/09/articles-author-query.png) By this point, you set both object and array relationships. These relationships enabled you to perform nested queries. 
Check the documentation on [BigQuery: nested object queries](https://hasura.io/docs/latest/queries/bigquery/nested-object-queries/) for more information. ## Data validation with BigQuery Even though BigQuery does not support constraints natively, you can use Hasura permissions to perform data validation. With this example application, let's consider the following scenarios: - authors should only be able to access their details - authors should only be able to access their articles Navigate to the "Permissions" tab in the "authors" table and add the `author` role. ![Announcing General Availability of BigQuery with Hasura](https://hasura.io/blog/content/images/2022/09/hasura-permissions-ii.png) Click on the "X" icon to open the configuration section and add the following custom check: ``` { "id": { "_eq": "X-Hasura-User-Id" } } ``` Then toggle all the columns and save the permissions. ![Announcing General Availability of BigQuery with Hasura](https://hasura.io/blog/content/images/2022/09/authors-table-author-permission.png) Before performing any queries, you need to set the `X-Hasura-Role` and `X-Hasura-User-Id` headers to the `author` role and the author's id, respectively. Once the headers are in place, you can fetch the author's details. ``` query getAuthor { publication_authors { id name } } ``` The `x-hasura-user-id` header is set to "1", meaning the query returns the author's details with the `user_id` of "1". ![Announcing General Availability of BigQuery with Hasura](https://hasura.io/blog/content/images/2022/09/getauthor-query.png) Similarly, configure the `author` role for the "articles" table as follows: ``` { "author_id": { "_eq": "X-Hasura-User-Id" } } ``` Then toggle all the columns and save the permissions. 
![Announcing General Availability of BigQuery with Hasura](https://hasura.io/blog/content/images/2022/09/articles-author-permissions.png) Let's test the permissions by running the following query in the GraphiQL editor: ``` query getArticles { publication_articles { id title published_on body author { id name } } } ``` It should return the author's articles specified in the `x-hasura-user-id` header. The image illustrates that it works as expected. ![Announcing General Availability of BigQuery with Hasura](https://hasura.io/blog/content/images/2022/09/getarticles-query.png) If you remove the `x-hasura-user-id` header, Hasura returns an empty array. This is how you perform data validations with the Hasura permission system. If you want to read more, check the documentation on [BigQuery: Data Validation](https://hasura.io/docs/latest/schema/bigquery/data-validations/). ## Next steps There is also a video that covers the topic. You can watch it here: {% youtube u_tfDKsarE4 %} We would love to hear about your use cases with BigQuery. Let us know in the comments! > Note: Check the documentation for information on the [features supported](https://hasura.io/docs/latest/databases/index/#feature-support).
hasurahq_staff
1,199,925
Hello world
this is actually my second blog, im using medium right but i might be migrating to Dev.to is fun...
0
2022-09-22T07:23:12
https://dev.to/carlosjuniordev/hello-world-1bbj
webdev, web3
This is actually my second blog; I'm using Medium right now but I might be migrating to Dev.to, it's fun tho! This is Carlos, so yea, this is kinda weird, but this blog is made only for me, or maybe on the chance of anyone finding it interesting or whatever. I decided to create this today just to track what I'm learning in programming right now. I had a couple of ideas a few weeks ago about pointers I thought I could never learn, but now they seem pretty easy to me. And I failed to join 42Lisbon in August of 2022, so I can remind myself I'm actually really bad at programming and I must learn a lot if I really wanna get a job. I also wanna practice my English writing skills since I'm Brazilian. I would say my goal is to be a blockchain developer and work full-time with blockchain, developing contracts or doing audits. I'm still very new, but I'm currently working on it. Well, if anyone finds this blog, here are my socials, I guess. LinkedIn https://www.linkedin.com/in/carlosjuniordev/ GitHub https://github.com/CarlosJunioor
carlosjuniordev
1,200,573
High CPU and zombie threads on Amazon Aurora Mysql 5.6
Recently noticed some high avg CPU utilization on an Amazon Aurora Mysql Databases running Mysql 5.6...
0
2022-09-22T21:12:38
https://dev.to/aws-builders/high-cpu-and-zombie-threads-on-amazon-aurora-mysql-56-1mbj
aws, rds, auroramysql
Recently I noticed some high average CPU utilization on an Amazon Aurora Mysql database running `Mysql 5.6 (oscar:5.6.mysql_aurora.1.22.2)`. Something I noticed that I thought was interesting to share were zombie threads: threads that ran for a long period of time and never finished, as well as threads that were not possible to kill. These were simple DDL statements that were triggered by a little reporting engine that created a bunch of temporary tables to gather some aggregations. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7veyzyv4inf6h1fm1bno.png) A quick look at the process list tells us that there are some DDL statements stuck for 4 days, as shown below: ```txt mysql> show full processlist; | Id | User | Host | db | Command | Time | State | Info | 77569519 | app | x.x.x.x:yyyyy | test | Query | 404949 | init | DROP TEMPORARY TABLE IF EXISTS temp1 ::::: ``` TRX status for the same: ``` mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_TRX where trx_mysql_thread_id = 77569519 \G *************************** 1. row *************************** trx_id: 124803462108 trx_state: RUNNING trx_started: 2022-09-17 21:01:45 trx_requested_lock_id: NULL trx_wait_started: NULL trx_weight: 33614 trx_mysql_thread_id: 77569519 trx_query: DROP TEMPORARY TABLE IF EXISTS temp1 trx_operation_state: NULL trx_tables_in_use: 0 trx_tables_locked: 0 trx_lock_structs: 14 trx_lock_memory_bytes: 376 trx_rows_locked: 0 trx_rows_modified: 33600 trx_concurrency_tickets: 0 trx_isolation_level: READ COMMITTED trx_unique_checks: 1 trx_foreign_key_checks: 1 trx_last_foreign_key_error: NULL trx_adaptive_hash_latched: 0 trx_adaptive_hash_timeout: 0 trx_is_read_only: 0 trx_autocommit_non_locking: 0 1 row in set (0.00 sec) ``` The initial suspect was disk issues causing these long running queries, but this was ruled out as metrics seemed ok and the database appeared to have plenty of Local Storage to deal with temporary tables. 
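Rather than inspecting thread ids one at a time, the same `INFORMATION_SCHEMA.INNODB_TRX` view can be filtered by age to surface every long-running transaction at once. A query sketch along these lines (the one-hour threshold is an arbitrary example value, not a recommendation):

```sql
-- List transactions running for more than an hour, oldest first,
-- with the thread id needed for a mysql.rds_kill() attempt.
SELECT trx_mysql_thread_id,
       trx_started,
       TIMESTAMPDIFF(HOUR, trx_started, NOW()) AS hours_running,
       trx_state,
       trx_query
FROM INFORMATION_SCHEMA.INNODB_TRX
WHERE trx_started < NOW() - INTERVAL 1 HOUR
ORDER BY trx_started;
```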
The next attempt to recover from these was to kill the long-running query to free up the CPU cycles. ```txt mysql> call mysql.rds_kill(77569519); Query OK, 0 rows affected (0.00 sec) mysql> call mysql.rds_kill_query(77569519); Query OK, 0 rows affected (0.00 sec) ``` No luck despite attempts to kill the query and even the connection. While `rds_kill_query` did not change anything, `rds_kill` did change the command status from `Query` to `Killed`. Neither of these were helpful in this case and the `trx_state` continued to be `RUNNING`. ```txt mysql> show full processlist; | Id | User | Host | db | Command | Time | State | Info | 77569519 | app | x.x.x.x:yyyyy | test | Killed | 422937 | init | DROP TEMPORARY TABLE IF EXISTS temp1 ::::: ``` Next up was to seek some help from AWS Support, which yielded the below recommendations: 1. Reboot the Amazon Aurora cluster (or trigger a failover). 2. Upgrade from Amazon Aurora `1.x` to Amazon Aurora `2.x`. Particularly `2.07.8`, which has some fixes from the community edition for stability around temporary tables. Note that Aurora `2.x` would mean an upgrade to Mysql `5.7.x` from a compatibility standpoint. Hope this helps!
krisiye
1,200,810
The joy of validating with Joi
Validation is a crucial step. But one look at the lines of IFs spawning from endless checks could...
0
2022-09-23T03:58:13
https://medium.com/sliit-foss/the-joy-of-validating-with-joi-b8c87991975b
beginners, javascript, validation, joi
Validation is a crucial step. But one look at the lines of IFs spawning from endless checks could send us over to NPM, hoping to find the perfect library. And one of the validation libraries you would find is Joi. And like its name, it's a joy to use. With Joi, you can > Describe your data using a simple, intuitive and readable language. So to ensure some user input contains a name and a valid email, it's simply ```js const schema = Joi.object({ name: Joi.string() .min(3) .max(30) .required(), email: Joi.string() .email({ minDomainSegments: 2, tlds: { allow: ['com', 'net'] } }) }) ``` This code block validates an input to have a `name` property with a number of characters between 3 and 30, and an `email` with two domain parts (sample.com) and a top level domain (TLD) of either .com or .net. But to get a better view of what Joi has to offer, let's see how we could build a simple form that validates a user's input according to a schema. ### A Simple Form Validation Installing Joi is as easy as running: ``` npm i joi ``` After importing Joi at the top of your file with: ```js const Joi = require("joi"); ``` Joi can be used by first constructing a schema, then validating a value against the constructed schema. For this example let's assume that we already have four text fields, taking in a user's name and email and asking the user to enter a password twice. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ybyoq3fk52vzll22mqya.png) <figcaption align = "center">A simple form built with Material UI</figcaption> Now to create the schema that Joi will validate against. 
Since a schema is designed to resemble the object we expect as an input, the schema for our four-property form data object will look like this: ```js const objectSchema = { name: Joi.string().alphanum().min(3).max(30).required(), email: Joi.string().email({ minDomainSegments: 2, tlds: { allow: ["com", "net"] }, }), password: Joi.string() .pattern(new RegExp("^[a-zA-Z0-9]{3,30}$")) .required() .messages({ "string.pattern.base": `Password should be between 3 to 30 characters and contain letters or numbers only`, "string.empty": `Password cannot be empty`, "any.required": `Password is required`, }), repeatPassword: Joi.valid(userData.password).messages({ "any.only": "The two passwords do not match", "any.required": "Please re-enter the password", }), }; ``` According to this schema: - `name` is validated to be: - an alphanumeric string - between 3 to 30 characters - a required field - `email` is checked to have: - two domain parts (sample.com) - a top level domain (TLD) of either .com or .net ### Custom Error Messages The fields `name` and `email` use default error messages: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k31ejq6yo5dh2161monz.png) But the fields `password` and `repeatPassword` use `.messages()` to return a custom error message for a set of specific error types. For example, the custom error messages for the `password` field are: ```js .messages({ "string.pattern.base": `Password should be between 3 to 30 characters and contain letters or numbers only`, "string.empty": `Password cannot be empty`, "any.required": `Password is required`, }), ``` The first one is a custom message for an error of type `string.pattern.base`, shown if the entered value does not match the RegExp string (since the `password` field is validated with a RegExp). Likewise, if an error of type `string.empty` is returned (the field is left blank), the custom error message "Password cannot be empty" is shown instead of the default. 
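To make the error-handling shape concrete without pulling in the library, here is a dependency-free sketch. The `validatePassword` helper is hypothetical, written only to mirror the `{ value, error }` result object and the `error.details[0]` type/message pair that Joi's `validate()` produces for the password rules above:

```javascript
// Hypothetical stand-in for schema.validate(): returns { value } on success,
// or { value, error } where error.details[0] carries a type and a message,
// mirroring the result shape Joi produces.
function validatePassword(value) {
  const fail = (type, message) => ({ value, error: { details: [{ type, message }] } });
  if (value === undefined) return fail("any.required", "Password is required");
  if (value === "") return fail("string.empty", "Password cannot be empty");
  if (!/^[a-zA-Z0-9]{3,30}$/.test(value)) {
    return fail(
      "string.pattern.base",
      "Password should be between 3 to 30 characters and contain letters or numbers only"
    );
  }
  return { value };
}

// Consuming code checks result.error exactly as the form handler later in
// this article does.
const ok = validatePassword("hunter2");
const bad = validatePassword("no spaces allowed!");
console.log(ok.error == null);             // true
console.log(bad.error.details[0].message); // the custom pattern message
```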
Moving on to `repeatPassword`, `Joi.valid()` makes sure that the only valid value allowed for the `repeatPassword` field is whatever the user data is for the `password` field. The custom error message for an `any.only` error type is shown when the entered value does not match the provided allowed value, which is `userData.password` in this case. The full list of possible errors in Joi can be viewed here: https://github.com/hapijs/joi/blob/master/API.md#list-of-errors ### Validating the Form Field on an onChange event In this example, each form field will have its own error message. So to make updating the state of each error message cleaner, an object was created to hold the values of the error messages for all form fields, with a useReducer hook to manage its state. ```js // INITIAL VALUE OF ALL ERROR MESSAGES IN THE FORM // Each property denotes an error message for each form field const initialFormErrorState = { nameError: "", emailError: "", pwdError: "", rpwdError: "", }; const reducer = (state, action) => { return { ...state, [action.name]: action.value, }; }; const [state, dispatch] = useReducer(reducer, initialFormErrorState); ``` The `reducer` function returns an updated state object according to the action passed in, in this case the name of the error message field and its new value. For a detailed explanation of the useReducer hook with an example to try out, feel free to check out my article on using the useReducer hook in forms. {% embed https://dev.to/methmi/forms-with-react-hooks-ig0 %} Moving on to handling the onChange events of the form fields, a function can be created to take in the entered value and the name of the error message property that should show the error message (to be used by the `dispatch` function of the useReducer hook). 
```js const handleChange = (e, errorFieldName) => { setUserData((currentData) => { return { ...currentData, [e.target.id]: e.target.value, }; }); const propertySchema = Joi.object({ [e.target.id]: objectSchema[e.target.id], }); const result = propertySchema.validate({ [e.target.id]: e.target.value }); result.error == null ? dispatch({ name: errorFieldName, value: "", }) : dispatch({ name: errorFieldName, value: result.error.details[0].message, }); }; ``` Line 2 to line 7 updates the state of the `userData` object with the form field's input. For simplicity, each form field's id is named after its corresponding property on the `userData` object. `propertySchema` on line 8 is the object that holds the schema of the form field that's calling the `handleChange` function. The object `objectSchema` contains properties named after each form field's id; therefore, to call a field's respective schema and to convert it into a Joi object, `Joi.object({[e.target.id]: objectSchema[e.target.id],})` is used and the resulting schema object is stored in `propertySchema`. Next, the input data is converted to an object and validated against the schema in `propertySchema` with `.validate()`. This returns an object with a property called `error`, which contains useful values like the error type (useful when creating custom messages) and the error message. But if the `error` property is not present in `result`, a validation error has not occurred, which is what we are checking in line 13. If an `error` is present, the dispatch function is invoked with the name of the form error object's field that should be updated in `name`, and the error message that it should be updated to in `value`. This will make more sense when we look at how `handleChange` is called in a form field. Given below is how the form field 'Name' calls `handleChange`. ```js <TextField //TextField component properties ... 
onChange={(value) => handleChange(value, "nameError")} value={userData.name} /> ``` `handleChange` accepts the value of the field as the first parameter, and then the name of the respective error object's field that the `dispatch` function in `handleChange` is supposed to update: `nameError`. The object `initialFormErrorState` has one property for each form field's error message. In this case, any validation error in the 'Name' field will change the `nameError` property of the form error state, which will in turn be displayed in the respective alert box under the form field. Here's a look at the finished form: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5yu2kmbc7vydotsh6lar.gif) Hope this simple example helped show how joyful validation with Joi can be. 😊 _**Till next time, happy coding!**_ --- _Happy emoji vector created by freepik - www.freepik.com_
methmi
1,200,836
Mac's Screen Sharing with Multiple Monitors 🖥🖥 ▶️🖥🖥
Intro Although MacOS offers an official remote desktop App, "Screen Sharing," it doesn't...
0
2022-09-23T05:29:15
https://dev.to/spookie/macs-screen-sharing-from-multiple-monitors-to-multiple-monitors--2bl2
mac
## Intro Although MacOS offers an official remote desktop App, "Screen Sharing," it doesn't support 'multiple monitors to multiple monitors' remote access. Opening 2 sessions, one corresponding to the 1st monitor and the other to the 2nd, would be an alternative, but the Screen Sharing App doesn't actually support opening multiple sessions from within the app for the same PC and same user. So people have struggled with workarounds like preparing multiple users [[App Community : "How to screen share with multiple monitors on both computers"]](https://discussions.apple.com/thread/251160403), but it's a hassle, at least for me. I finally managed to do it without bothersome work like switching the user, and I summarize how to do that in the following! Glad if someone finds it helpful:) ## Answer The answer was quite simple: if you call the executable of the App directly, you can open 2 sessions for the same destination. ### Search App's executable In MacOS (I'm using Monterey), any App's executable is often located at `${APP_DIRECTORY}/Contents/MacOS/${APPNAME}`. In my system, Screen Sharing is in `/System/Library/CoreServices/Applications/Screen\ Sharing.app`, so calling the following command in Terminal results in 2 sessions of remote desktop. ```bash # open for 1st monitor /System/Library/CoreServices/Applications/Screen\ Sharing.app/Contents/MacOS/Screen\ Sharing # call the same exe. for 2nd monitor /System/Library/CoreServices/Applications/Screen\ Sharing.app/Contents/MacOS/Screen\ Sharing ``` ### For easy calling Although just calling the above command solves the problem, it is still troublesome to memorize or type this long path, so I prefer to set an alias for the command. All you need to do is open `~/.bashrc`, add the following line somewhere in that file, and save it. ```bash # User specific aliases and functions alias remote_desktop="/System/Library/CoreServices/Applications/Screen\ Sharing.app/Contents/MacOS/Screen\ Sharing" ## ... other settings... 
``` then run ```bash source ~/.bashrc ``` to reflect the change. Since `~/.bashrc` is only read when bash starts, the above is needed to force bash to read the configuration in `~/.bashrc` again. After that, every time you hit ```bash remote_desktop ``` in Terminal, a new Screen Sharing window will open up. I'm connecting from 2 monitors (local) to 2 monitors (remote) without any problem. ## Summary Calling the executable itself enables us to open multiple sessions for almost any App, for example Blender. If the above solution suddenly stops working after a major update, I suspect the directory structure has changed. You may want to check if the path really exists, and search for the new executable path. Anyway, hope multiple monitors to multiple monitors remote desktop will be officially supported in the future.
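The same trick can be wrapped in a small helper that works for any `.app` bundle. This is only a sketch under the layout assumption stated above (that the executable lives at `<App>.app/Contents/MacOS/<App>`), and the `new_instance` function name is mine, not part of MacOS:

```shell
#!/bin/bash
# new_instance: launch a fresh, independent instance of an app,
# given the path to its .app bundle (assumes the standard layout
# <App>.app/Contents/MacOS/<App> described above).
new_instance() {
  local app_dir="$1"
  local name exe
  name="$(basename "$app_dir" .app)"
  exe="$app_dir/Contents/MacOS/$name"
  if [ ! -x "$exe" ]; then
    echo "no executable found at: $exe" >&2
    return 1
  fi
  "$exe" &   # run in the background so the Terminal stays usable
}
```

Running `new_instance "/System/Library/CoreServices/Applications/Screen Sharing.app"` twice would then give the two sessions, one per monitor.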
spookie
1,200,935
Kubernetes volumes upside-down with Discoblocks - #2
This blog post is the second part of my series about Discoblocks. If you haven't read the previous...
0
2022-09-23T07:31:08
https://dev.to/mhmxs/kubernetes-volumes-upside-down-with-discoblocks-2-2182
kubernetes, devops, opensource, cloud
This blog post is the second part of my series about Discoblocks. If you haven't read the [previous](https://dev.to/mhmxs/kubernetes-volumes-upside-down-with-discoblocks-23kb) episode, please do so before you continue. So Discoblocks is one of the open-source projects I'm working on, and our new pre-release build brings cool features I would like to write about. # v0.0.5 (aka Ibiza Disco) has been released Release notes: https://github.com/ondat/discoblocks/releases/tag/v0.0.5 - WebAssembly support for CSI driver integration - Ondat CSI driver integration - Horizontal autoscaling of volumes ## WebAssembly support for CSI driver integration In the new build of Discoblocks we have replaced in-tree CSI driver integration with WASI modules. If there is a missing driver for your use case, just implement a small interface, compile your driver to a WASI module and mount it into the container (a sub-directory at `/drivers`). Discoblocks starts using it once you enable the new driver in the configuration. 
### Here is a simple example

```
package main

import (
	"fmt"
	"os"

	"github.com/valyala/fastjson"
)

func main() {}

//export IsStorageClassValid
func IsStorageClassValid() {
	json := []byte(os.Getenv("STORAGE_CLASS_JSON"))

	if !fastjson.Exists(json, "allowVolumeExpansion") || !fastjson.GetBool(json, "allowVolumeExpansion") {
		fmt.Fprint(os.Stderr, "only allowVolumeExpansion true is supported")
		fmt.Fprint(os.Stdout, false)
		return
	}

	fmt.Fprint(os.Stdout, true)
}

//export GetPVCStub
func GetPVCStub() {
	fmt.Fprintf(os.Stdout, `{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "name": "%s",
    "namespace": "%s"
  },
  "spec": {
    "storageClassName": "%s"
  }
}`, os.Getenv("PVC_NAME"), os.Getenv("PVC_NAMESACE"), os.Getenv("STORAGE_CLASS_NAME"))
}

//export GetCSIDriverNamespace
func GetCSIDriverNamespace() {
	fmt.Fprint(os.Stdout, "storageos")
}

//export GetCSIDriverPodLabels
func GetCSIDriverPodLabels() {
	fmt.Fprint(os.Stdout, `{
  "app": "storageos",
  "app.kubernetes.io/component": "csi"
}`)
}

//export GetMountCommand
func GetMountCommand() {
	fmt.Fprint(os.Stdout, `DEV=$(chroot /host ls /var/lib/storageos/volumes/ -Atr | tail -1) &&
chroot /host nsenter --target 1 --mount mkdir -p /var/lib/kubelet/plugins/kubernetes.io/csi/pv/${PVC_NAME} &&
chroot /host nsenter --target 1 --mount mount /var/lib/storageos/volumes/${DEV} /var/lib/kubelet/plugins/kubernetes.io/csi/pv/${PVC_NAME} &&
DEV_MAJOR=$(chroot /host nsenter --target 1 --mount cat /proc/self/mountinfo | grep ${DEV} | awk '{print $3}' | awk '{split($0,a,":"); print a[1]}') &&
DEV_MINOR=$(chroot /host nsenter --target 1 --mount cat /proc/self/mountinfo | grep ${DEV} | awk '{print $3}' | awk '{split($0,a,":"); print a[2]}') &&
for CONTAINER_ID in ${CONTAINER_IDS}; do
  PID=$(docker inspect -f '{{.State.Pid}}' ${CONTAINER_ID} || crictl inspect --output go-template --template '{{.info.pid}}' ${CONTAINER_ID}) &&
  chroot /host nsenter --target ${PID} --mount mkdir -p ${DEV} ${MOUNT_POINT} &&
  chroot /host nsenter --target ${PID} --mount mknod ${DEV}/mount b ${DEV_MAJOR} ${DEV_MINOR} &&
  chroot /host nsenter --target ${PID} --mount mount ${DEV}/mount ${MOUNT_POINT}
done`)
}

//export GetResizeCommand
func GetResizeCommand() {}

//export WaitForVolumeAttachmentMeta
func WaitForVolumeAttachmentMeta() {}
```

That's all you need to bring your own driver.

## Ondat CSI driver integration

As you saw in the previous example, we have a driver for Ondat (formerly StorageOS) next to the AWS EBS CSI support. This driver is for demo purposes only - so please don't use it in production - but it should be a great choice for testing the system. All you need to do is execute the following commands:

```
kind create cluster --image=storageos/kind-node:v1.24.2
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
kubectl storageos install --include-etcd --etcd-replicas 1 --stos-version v2.9.0-beta.1
kubectl apply -f https://github.com/ondat/discoblocks/releases/download/v0.0.5/discoblocks_v0.0.5.yaml
```

Once provisioning has finished, you should create your first workload:

```
kubectl apply -f https://github.com/ondat/discoblocks/raw/7b72c8d87aa5d87a801e1b2e11fa98389f70f485/config/samples/discoblocks.ondat.io_v1_diskconfig-csi.storageos.com.yaml
kubectl apply -f https://github.com/ondat/discoblocks/releases/download/v0.0.5/core_v1_pod.yaml
```

Test the end result:

```
kubectl exec $(kubectl get po --no-headers | tail -1 | awk '{print $1}') -- df -h | grep discoblocks/sample
```

> 973.4M 24.0K 906.2M 0% /media/discoblocks/sample-0

#### It is time to register your cluster at our [Portal](https://portal.ondat.io/) to enjoy your FREE TIER!

## Horizontal autoscaling of volumes

One of the most exciting features is horizontal autoscaling. In the previous version, only vertical autoscaling was implemented: Discoblocks actively monitors the created volumes, and once a volume hits the threshold, Discoblocks increases the size of the volume.
But in the new version, if the volume is not scalable vertically (every disk has a finite capacity), Discoblocks creates a new disk and mounts it into the running pod.

#### Yes, you read it right. Discoblocks ...

1. creates a new PersistentVolumeClaim
1. sets the owner of the new PVC to patient zero (the first PVC created for the pod), which should be handy when you have to delete PVCs
1. creates a VolumeAttachment to bind the volume to the target node
1. spins up a management job to format and mount the volume (the weird `GetMountCommand` in the driver)

Generate data:

```
kubectl exec $(kubectl get po --no-headers | tail -1 | awk '{print $1}') -- dd if=/dev/zero of=/media/discoblocks/sample-0/data count=1000000
sleep 30
kubectl exec $(kubectl get po --no-headers | tail -1 | awk '{print $1}') -- dd if=/dev/zero of=/media/discoblocks/sample-0/data count=2000000
sleep 60
kubectl exec $(kubectl get po --no-headers | tail -1 | awk '{print $1}') -- dd if=/dev/zero of=/media/discoblocks/sample-1/data count=1000000
sleep 30
kubectl exec $(kubectl get po --no-headers | tail -1 | awk '{print $1}') -- dd if=/dev/zero of=/media/discoblocks/sample-1/data count=2000000
sleep 60
```

Test the end result. If :crossed_fingers: everything has worked perfectly, you will see all 3 volumes mounted into the pod. :tada:

```
kubectl exec $(kubectl get po --no-headers | tail -1 | awk '{print $1}') -- df -h | grep discoblocks/sample
```

> 1.9G 976.6M 896.1M 52% /media/discoblocks/sample-0
> 1.9G 976.6M 896.1M 52% /media/discoblocks/sample-1
> 973.4M 24.0K 906.2M 0% /media/discoblocks/sample-2

#### Please breathe, slowly in, ..., slowly out through the nose, and don't hesitate to give it a try :D

I'll let you figure out how awesome these features are. Please feel free to share your ideas, join the development, or simply enjoy the product.
mhmxs
1,201,183
The Adventures of Blink #9: The Secret Shortcut to Create Culture
A Tale of Two Cultures Culture 1 A division of the company decided after...
0
2022-09-25T19:46:38
https://dev.to/linkbenjamin/the-adventures-of-blink-9-the-secret-shortcut-to-create-culture-101c
culture, leadership, devrel, community
## A Tale of Two Cultures

### Culture 1

A division of the company decided, after reading some articles and hearing some speakers, that it needed to invest in "cultural changes". It commissioned a cross-functional team (internal to the one division) to establish the principles of the culture. They put together a list of topics... the team who generated the list didn't fully agree on them, but worked really hard to come to a compromise that everyone finally signed off on. The bullet points were somewhat reflective of actual behaviors and somewhat aspirational.

An announcement was made to great fanfare, with promotional materials and awards banquets and little trophies to be handed out to the winners each year when people were nominated by their peers (and subsequently approved by management) as great examples of one bullet point or another. A few people really bought into the idea and used the principles as framing for discussions, but by and large, the list was just a pretty frame on the wall.

### Culture 2

The company realized as it was growing that it needed to codify the behaviors that the small, original, close-knit team had most appreciated about each other. They assembled a team of folks from across the whole company who helped narrow the focus and condense the message to a short, memorable list. Their list described actual behaviors that they wanted to continue doing as new people joined, rather than ideals to which they aspired.

An announcement went out that there was a new required training course to understand the new list of principles. It was presented by the CEO, and recorded for replay as required viewing for every new employee. Further, the company allocated budget to build a way for employees to recognize each other for things that exemplify the items on the list. The program sends the recognized person a small gift, along with a short writeup that gets shared with their management chain to explain why they were being shouted out.
The hiring team implemented the list of principles in their hiring practices and specifically framed the discussions with candidates around these topics. People who led projects or teams used the principles to guide discussions about decisions to be made. All of the company's strategy, priorities, and focus began to be filtered through the lens of the list of principles, because the principles described behaviors that already existed to some extent and that everyone wanted to continue.

### Which company created "culture"?

Sorry, that was a trick question. The answer is BOTH. Because "culture" isn't some mystical-energy frippery like the Force. "Culture" (note the quotation marks, I'll dig into this later) is created with literally every decision that's made, with every word that's spoken, with every action taken. It's a summation of _who we are_, whether that's on a micro individual scale or across the whole of the company.

To put a finer point on it: The first company above created a culture of lip service to the things that matter. They _said_ they supported some ideals, and even went as far as enumerating them... but that's where it stopped. The follow-through was weak and ill-coordinated. The second company above strives to live their values _authentically_. They incorporate intentional discussion into _everything they do_ to ensure that the decisions being made align to the values they've claimed to hold.

## So why do you keep putting "culture" in quotes?

The epiphany I'd like to share with you is that we (even I) have been going about this "culture" thing _all wrong_. Yes, the "culture" of our company needs to grow and evolve and adapt. Yes, it's critical that we learn these lessons quickly, before we're disrupted, and before we ruin our relationship with everyone who works here. But "culture" _can't be separated_ from other aspects of the business. It isn't compartmentalizable like that. And THAT's the part where we're falling down...
we're trying to make it some sort of "other" entity that's not related to _what we do day to day_.

## The Secret

I've sat in many meetings bemoaning the idea that "we can't fix that because it's a cultural problem". Or stating the cop-out position that "The Company can't control the culture". (Note: I call this a cop-out because of poor word choice: no, they don't control it, but the implication is that they have no responsibility for culture just because they don't control every aspect of it - they CAN and DO change and influence it!!!)

Here's the secret to changing "culture". Everything you do and say, every way you act... all that stuff affects "culture" constantly. The problem with changing culture is that _we naturally resist **change**_.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gwsxqn0rdyzg0cvrnn2e.png)

The Secret to culture change, then, is not in the **_"culture"_** as much as it is in the **_"change"_**.

## The Parable of the Change Agent

Once upon a time, a man went fishing. He had a pole with a string tied to the end, and a small metal hook. He spent the whole day with his hook in the water and caught one fish.

He went home, and read a book written by a professional fisherman. He went back to the pond with his pole, and dangled the hook in the water all day again. He still caught only one fish.

He took a few months to study the biological features of the fish, learned their likes and dislikes, learned how to find out where they'd be at certain times of day, and became an absolute authority on fish. He went back to the pond with his pole, and dangled the hook in the water all day again. He still caught only one fish.

Y'all, this is **exactly** the approach that we take with cultural change. We read about it, study it, preach about it, write blogs on it _(yes I'm preaching to myself here too!)_... and then proceed to do the same old thing and wonder why we didn't get better results.
For example: we decide we're going to "do an Agile Transformation". To paraphrase how Jez Humble famously put it, we pretty much change nothing except having our meetings standing up... and then we wonder why we haven't seen the improvements we were promised. _Why is this surprising???_

## The Secret to Changing Culture is Not to Change the Culture

A common pattern in development teams goes something like this:

- We deployed something.
- It caused a major outage.
- We respond by requiring a signoff to proceed - someone to accept the risk.
- We deploy again, this time waiting for the approval.
- Something totally different breaks and causes a major outage.
- We respond by adding in additional approvals; someone different was unaware of the change and that's what caused the outage.
- We deploy again, this time waiting for 2 approvals.
- Something totally different breaks and causes a major outage...

Our goal here is admirable... we want to be able to deploy without major outages. Our solution to the problem **_isn't helping us, though_**. Just like the fisherman of the parable, we just keep on going back to the pond and dangling the hook in there, obliviously.

Here's where it gets really weird though - all along the way, our team is picking up cultural cues!

- "We have to wait for ____'s approval because we had an outage last week."
- "There's a new change request form we have to fill out and submit by Tuesday in order to be allowed to deploy this week."
- "We can't start the deployment unless Susan's team is on the call, they're the ones who are authorized to press the start button."

We think we're making things safer. But what we're really doing is _teaching the team that we don't value their ability to get things done in a timely fashion_. Every change makes it harder and slower to deliver... and our team is learning from this firsthand!

## What do we do now?
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yiic87cjhtptmy3o729z.png)

Let's envision culture not as something that's being created, like a building. You see, you can pause the construction of a building and walk away to work on other things, and then come back to it. Let's think of it as something that's being shaped and molded, like a lump of clay. The clay is on the wheel, and once it comes off it won't be fit to shape anymore. There's no pausing for a "special pottery initiative" where we'll fix up the pot later; we have this one chance to shape it.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/msa6t8srtks65vn4aezl.png)

What will we do?
linkbenjamin
1,201,273
Answer: how to assign hexadecimal color code in primarySwatch in flutter?
answer re: how to assign hexadecimal color...
0
2022-09-23T16:40:27
https://dev.to/saadmansoor7777/answer-how-to-assign-hexadecimal-color-code-in-primaryswatch-in-flutter-240c
{% stackoverflow 62433225 %}
saadmansoor7777
1,201,455
Vault Enable Userpass Auth Method
NOTE: This post assumes that you have all ready set up a vault server:...
0
2022-09-23T19:56:58
https://dev.to/frederickollinger/vault-enable-userpass-auth-method-1d5p
NOTE: This post assumes that you have already set up a Vault server: https://dev.to/frederickollinger/production-hashicorp-vault-minimal-configuration-485a

It also assumes that you are logged in with the root token.

## What is an Auth Method?

An auth method is a way to validate requests from clients. It provides authentication, that is, it checks that you are who you say you are. It does not handle authorization, which determines which actions you may perform and which resources you may access.

## Where Would You Use the Userpass Auth Method?

Userpass allows you to create user accounts that map to real humans. Each user can authenticate separately using a password.

## What is a Policy?

A policy controls what a particular role can do with Vault: which secrets it can read, change, and so on.

## Enabling Userpass

As a one-time operation, you need to enable the userpass auth method, as it is off by default in new Vault deployments.

```sh
vault auth enable userpass
```

## Create a New User

```sh
vault write auth/userpass/users/bondj password=doubleohseven policies=default
```

## List All Users

```sh
vault list auth/userpass/users
```

## Login To userpass

```sh
vault login -method=userpass username=bondj password=doubleohseven
```

## References

1. Official Documentation https://www.vaultproject.io/docs/auth/userpass
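As a footnote to the "What is a Policy?" question above: policies are written in Vault's HCL format. Here is a minimal sketch of what one could look like; the path and capabilities are illustrative, not from this post, so adjust them to your own secrets layout.

```hcl
# Hypothetical read-only policy for one KV v2 path -- adjust to your layout.
path "secret/data/team-a/*" {
  capabilities = ["read", "list"]
}
```

You would load such a file with `vault policy write <name> <file>.hcl` and then reference it via `policies=<name>` when creating a user, instead of `policies=default` as in the example above.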
frederickollinger
1,201,502
Grial UI Kit 4 is here! Beautiful UI for .NET MAUI apps
This post was written by the UXDivers Team and originally was posted on the Grial UI Kit Blog Yes!...
0
2022-09-23T21:55:28
https://dev.to/grialkit/grial-ui-kit-4-is-here-k71
dotnet, dotnetmaui, xamarinforms, xamarin
> This post was written by the [UXDivers Team](https://dev.to/grialkit) and originally was posted on the [Grial UI Kit Blog](https://dev.to/grialkit/grial-ui-kit-4-is-here-k71)

Yes! Grial UI Kit 4 is here and it includes .NET MAUI support.

For those of you not familiar with the product, Grial UI Kit has been around for quite some time. From the early days of Xamarin.Forms to this new .NET MAUI era, our team has been trying to push the boundaries of what’s possible with the platforms in terms of user experience and user interface design. Grial UI Kit provides .NET developers all the resources they need to build beautiful apps faster than ever.

Today, after 7 years in the market and countless Xamarin.Forms apps built, we are happy to announce that Grial UI Kit 4 is ready.

## So, what's Grial UI Kit?

Easy, it’s the most comprehensive library of UI/UX resources available for .NET MAUI and Xamarin.Forms. It’s a world of infinite resources that will save you and your team countless hours of development time. Plus, it’s the best way to bootstrap your .NET MAUI project.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lpdz69881uqe9uq0ape2.PNG)

## Highlights

* Over 100 fully customizable pages ready to be included in your .NET MAUI or Xamarin.Forms project.
* Use our templates out of the box or recombine different parts of different pages to create your own. Add, remove, tweak, whatever your needs are, Grial UI Kit can adjust to them.
* Over 30 UI controls available. Floating Menu, Drawer Control, Data Grid, Video Player, Maps, Tab Control, Pop-Ups, Carousel View, Checkbox, Radio Buttons, Selectable Wrap Panel, Rating Control, and so much more.
* Super fast project kickstart. Simply create an account in [Grial Web Admin](https://admin.grialkit.com/secure/grial/front/signup), select the pages you want to include in your project, and select a base theme or pick an accent color for your app.
Change your namespaces, and voilà: you get a complete, MVVM-framework-agnostic, well-structured, and fully customizable .NET MAUI or Xamarin.Forms solution.

_Btw, the great Gerald Versluis coined the term "App in a box" and we loved it!_

The Grial UI Kit demo app is currently available for iOS and Android phones and tablets. The demo app was designed not only to showcase what’s included in Grial UI Kit, but also to serve as a tool for those .NET devs out there trying to solve a UI puzzle. Search for patterns, controls, icons, you name it: whatever you need to include in your .NET MAUI or Xamarin.Forms app, you’ll find it in Grial.

Download the Grial UI Kit demo app for iOS: https://apps.apple.com/app/grial-uikit/id1099501310

Download the Grial UI Kit demo app for Android: https://play.google.com/store/apps/details?id=uxdivers.grial

## Key things about this new Grial UI Kit version

* **.NET MAUI support**. This is of course the first highlight of this new version.
* **Fully redesigned**. The kit has been completely re-designed to be even more flexible and sexy.
* **Grial Web Admin**. Our web admin has been redesigned and the Grial UI Kit content catalog has been organized in categories. You can now search for a page, control, or whatever you need to include in your project.
* **Redesigned demo app**. Don’t let us tell you about how beautiful our demo app is, get it on the stores and get inspired for your next .NET MAUI or Xamarin.Forms project.

## Useful links

Visit the Grial UI Kit website to know everything about it. https://grialkit.com/

Get started with Grial UI Kit 4: https://admin.grialkit.com/

Get the demo apps here. https://grialkit.com/download-demo-app

Grial UI Kit blog. Subscribe and stay tuned. https://grialkit.com/blog

Thanks for reading! Get Grial UI Kit, get inspired.
dotnetblogger
1,201,775
Forex API for Real-Time Currency Rates
Forex API for Real-Time Currency Rates A Forex API is an application programming interface...
0
2022-09-24T08:00:27
https://dev.to/marketingfixer/forex-api-for-real-time-currency-rates-42ji
api, time, currency, rate
## Forex API for Real-Time Currency Rates

A Forex API is an application programming interface (API) that provides real-time currency rates. A Forex API can be used to access live and historical data, as well as to place trades. By using a Forex API, you can get up-to-the-minute currency exchange rates for all major currencies. This can be extremely helpful if you're traveling abroad and need to know the current rate for exchanging your money. A Forex API can also be used by businesses to keep track of their international payments and receipts. You can also use an API to back-test trading strategies before putting them into practice.

As a Forex trader, it's important to have access to real-time currency rates. There are many different [free Forex APIs](https://fixer.io/) available, each with its own set of features and benefits. Some platforms offer a more robust suite of tools for tracking currency movements, while others may be more focused on providing up-to-the-minute exchange rates. It's important to find the right API for your needs in order to make the most informed decisions when trading Forex.

## Fixer: The Real-Time Currency Converter API

If you're someone who frequently travels or deals with foreign currency, then you know how difficult it can be to keep track of the ever-changing exchange rates. Even if you don't travel often, it's still important to be aware of the current exchange rates if you're planning to send money abroad. If you need to get real-time currency exchange rates, the Fixer API is the best free currency converter API option. With real-time exchange rates, it's easy to find out accurately how much money you should be getting paid in another currency.
marketingfixer
1,206,812
Most Common Array Methods JavaScript in 2023
Let’s talk about JavaScript Array. If you are looking for a job or learning JavaScript these methods...
0
2022-09-29T18:01:04
https://dev.to/aliegotha/most-common-array-methods-javascript-in-2023-11b3
javascript, webdev, programming, computerscience
Let’s talk about JavaScript arrays. If you are looking for a job or learning JavaScript, these methods might be very helpful during coding interviews. Here you can find how to prepare for a [coding interview in one week](https://devhubby.com/thread/how-to-prepare-for-coding-interview-in-one-week).

**PUSH Method use in JavaScript Array**

The push() method adds new elements to the end of the array, and returns the new length.

```javascript
const arr = ["I", "Am"];
arr.push("Developer");
// arr is now: ['I', 'Am', 'Developer']
```

**SLICE Method use in JavaScript Array**

The slice() method selects a part of an array, and returns it as a new array.

```javascript
const arr = ["I", "Am", "Developer"];
arr.slice(1, 2);
// Output: ['Am']
```

**TOSTRING Method use in JavaScript Array**

The toString() method converts an array to a string, and returns the result.

```javascript
const arr = ["I", "Am", "Developer"];
arr.toString();
// Output: "I,Am,Developer"
```

**SHIFT Method use in JavaScript Array**

The shift() method removes the first element of an array, and returns that element.

```javascript
const arr = ["I", "Am", "Developer"];
arr.shift();
// Returns: 'I'; arr is now ['Am', 'Developer']
```

**MAP Method use in JavaScript Array**

The map() method creates a new array with the results of calling a function on every array element.

```javascript
const arr = [1, 4, 9, 16];
arr.map(x => x * 2);
// Output: [2, 8, 18, 32]
```

**POP Method use in JavaScript**

The pop() method removes the last element of an array, and returns that element.
```javascript
const arr = ["I", "Am", "Developer"];
arr.pop();
// Returns: 'Developer'; arr is now ['I', 'Am']
```

**FILTER Method use in JavaScript**

The filter() method creates an array filled with all the array elements that pass a test (provided as a function).

```javascript
const arr_filter = ["I", "Am", "Developer"];
var filter = arr_filter.filter(word => word.length > 3);
// Output: ['Developer']
```

**INCLUDES Method use in JavaScript**

The includes() method determines whether an array contains a specific element.

```javascript
const arr = ["I", "Am", "Developer"];
arr.includes("Am");
// Output: true
```

I would also recommend my article about "[Most Commonly Used JavaScript Methods in 2023](https://dev.to/aliegotha/most-commonly-used-javascript-methods-in-2023-3jo8)" and my list of great [programming and coding books](https://infervour.com/blog/best-programming-and-coding-books-to-read).

If you like this article, please comment or like :)
aliegotha
1,201,863
How to Create Complexity from Simple Rules
The following is a very simple bared-down code necessary to produce Mandelbrot in your browser ...
0
2022-09-24T08:58:48
https://dev.to/hunar4321/how-to-create-complexity-from-simple-rules-1448
javascript, tutorial, beginners
The following is a very simple, pared-down piece of code necessary to produce the Mandelbrot set in your browser:

```
<canvas id="gardun" width="1000" height="1000"></canvas>
<script>
m = document.getElementById("gardun").getContext("2d")
atom = function(x,y,c){m.fillStyle=c; m.fillRect(x,y,3,3)}

for(y=1; y<1000; y++){
for(x=1; x<1000; x++){

dx = (x-500)/2000-0.12
dy = (y-500)/2000-0.82

a = dx
b = dy

for(t=1; t<200; t++){
d = (a*a)-(b*b)+dx
b = 2*(a*b)+dy
a = d
H = d>200
if(H){atom(x,y,"rgb("+ t*3 +","+ t +","+ t*0.5 +")"); break}

}}}
</script>
```

The step-by-step tutorial and explanation is also available on YouTube for those interested: https://youtu.be/mzizK6ms-gY
hunar4321
1,202,093
A quick look at Qwik
I just got curious about Qwik, so I just started to read and research this new framework, so let's...
0
2022-10-02T06:33:14
https://dev.to/omher/quick-look-at-qwik-3pjb
javascript, react, webdev, qwik
I just got curious about Qwik, so I started to read and research this new framework. Let's take a first look at it.

Qwik brings an entirely new rendering paradigm to the table called "resumability", which eliminates the need for hydration. **_Hydration_** is the technique used by almost every framework in the community to make server-rendered websites fully interactive.

![Bottle leaking water](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dmom95l4gotwa7yyng2h.jpg)

To understand what makes Qwik special, we need to understand the current problem with existing frameworks. The key to a good Lighthouse performance score is to use less JavaScript, but the problem with web development is that to implement the features your customers want you need more JavaScript, while to make your site fast you need less JavaScript. This reminds me of "[The chicken or the egg causality dilemma](https://en.wikipedia.org/wiki/Chicken_or_the_egg)".

![Chick egg problem 3d](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/75e1u909ajjm7kztp3gq.png)

In almost every existing framework, you start with a large number of kilobytes of JavaScript out of the box. Take a React application, for example: you need react and also react-dom, and then you start adding your own code, so the size keeps growing. The bundle scales with how much application code you have on the page.

![Reason why bundle is slow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xf2nrlr5rlhdb0uapbx9.jpg)

Another reason for this slowness is that on the initial page load the framework needs to hydrate the DOM and rebuild/bootstrap the entire component tree from the ground up. It's like watching a movie that can't be paused: if you restart the application by hitting the refresh button in the browser, it needs to re-execute all the JavaScript from the beginning to get back to where it was.
![Initial flow of page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qqafu5vv2kcu1e3sfsp2.jpg)

Frameworks like `Astro.js` have recognized this problem and use a technique called "partial hydration" to selectively hydrate the DOM, but Qwik cuts out hydration altogether, as if it were not even necessary. It delivers instantly interactive HTML, which means that in theory you should be able to get a perfect Lighthouse performance score no matter how big and complex your JavaScript code base is.

That sounds too good to be true, so how does Qwik solve it 🤔?

The key innovation here is that a Qwik app can be fully serialized as HTML. In other words, at any moment you can hit the pause button and capture all the data and closures in the application and represent it all as an HTML string. That's huge for server-side rendering, because by the time that HTML gets to the browser, the app just picks up where the server left off without needing to execute any JavaScript at all. That's why they coined the term "resumability".

![JSX Code](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nestg1zzy2mdwrdevzol.png)

![HTML DOM application](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/23a1j9xhmp1ejbyj2tal.png)

Another thing that makes this magic possible is lazy loading, which is built in as a primitive part of the framework. To understand it, let's look at some code. At first it looks like a React app that uses functional components and `JSX`, but what's the deal with the dollar sign? It represents a lazy-loaded boundary, and what you will find is that everything is lazy loaded. That even includes event handlers that close over the state of the application, which is kind of crazy: how does this chunk of JavaScript know the state of the application if it's lazy loaded?
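To build some intuition for what those `$` boundaries buy you, here is a tiny plain-JavaScript sketch of the underlying idea. This is not Qwik's actual API; the chunk registry and names are invented for illustration. The point is that a handler's body lives in a separate chunk that is only loaded the first time the event fires:

```javascript
// Not Qwik's real API -- just the lazy-load-on-interaction idea.
// In a real app the "chunk" would be a dynamic import of a tiny file;
// here it is simulated with a factory function in a registry object.
const chunks = {
  counter_onClick: () => (state) => { state.count += 1; },
};

let chunksLoaded = 0;

function lazyHandler(chunkId) {
  let impl = null;
  return (state) => {
    if (impl === null) {              // first interaction: "download" the chunk
      impl = chunks[chunkId]();
      chunksLoaded += 1;
    }
    impl(state);
  };
}

const state = { count: 0 };
const onClick = lazyHandler("counter_onClick");

console.log(chunksLoaded);            // 0 -- no handler code loaded up front
onClick(state);                       // first "click" loads the chunk, then runs it
onClick(state);                       // later clicks reuse the already-loaded chunk
console.log(state.count, chunksLoaded); // 2 1
```

The serialized HTML is what lets Qwik know which chunk belongs to which listener without re-running the component tree first.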
![Lazy loading](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1gwy2sldpzwk17z2lnpw.gif)

Looking at the network tab, we find zero JavaScript on the initial page load. The JavaScript isn't loaded until we click the button; it contains the code we want to execute and also has access to the lexical environment to update state that might be shared with other components, which itself comes from another lazy-loaded chunk.

**The takeaway here is that you don't need to load any JavaScript until the user interacts with the UI.**

If we take a look at the build output of our application, we notice a ton of tiny chunks, less than one kilobyte each, instead of a single large bundle. Because of that, Qwik can scale infinitely: you can add more JavaScript to your app and it will simply create another tiny chunk.

![Build result](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bl73b9e9clndz5awm1fr.png)

This was my first look at Qwik, and I have to say that I really like the approach and would like to try it with more complex use cases.

### Resources

- [Official Docs](https://qwik.builder.io/)
- [Qwik on github](https://github.com/BuilderIO/qwik)
- [Qwik app I used in this post](https://github.com/OmerHerera/qwik-omher-app)

Thanks for reading.
omher
1,202,328
Did I even learn anything?
It's been really busy at the job. I've been working on my tracks for music. I started the project...
0
2022-09-25T00:58:06
https://dev.to/hethinksthrice/did-i-even-learn-anything-3nok
beginners, python, spaceinvaders, gamedev
It's been really busy at the job. I've been working on my tracks for music. I started the project development for the Space Invaders game. I worked on the basic functions that are needed to create a screen for the game. It's interesting to create a blank screen; I felt accomplished just creating a black screen. Tomorrow, I plan to work on this more before work. One of the biggest challenges is that I feel like I'm not learning... but I know that coding takes time... and practice... similar to making music. I will keep posting updates on my progress as much as possible.
hethinksthrice
1,202,417
How i can edit cell of v-data-table on double mouse click. (Inline update) by using vue + vuetify ??
I am trying to make changes in props of v-data-table for in line editing i.e On double click on any...
0
2022-09-25T04:38:57
https://dev.to/princesinghpnjr/how-i-can-edit-cell-of-v-data-table-on-double-mouse-click-inline-update-by-using-vue-vuetify--148p
javascript, devops, vue
I am trying to make changes to the props of v-data-table for inline editing, i.e., on double-clicking any cell I can edit that cell's data without using v-edit-dialog.
princesinghpnjr
1,202,653
While we're here, we might as well...
“We’re already here, so we might as well…” Have you ever said something like this? This is an...
0
2022-09-27T09:49:00
https://jhall.io/archive/2022/09/25/while-were-here-we-might-as-well.../
batchsize, continuousdelivery
--- title: While we're here, we might as well... published: true date: 2022-09-25 00:00:00 UTC tags: batchsize,continuousdelivery canonical_url: https://jhall.io/archive/2022/09/25/while-were-here-we-might-as-well.../ --- “We’re already here, so we might as well…” Have you ever said something like this? This is an example of “batching”. My wife and I live in Europe. My parents live in the midwest United States. My wife’s parents live in Guatemala. Flying across the ocean and changing time zones is a non-trivial price to pay for visiting family. So we tend to batch our holidays. That is to say, when we visit the US, we try to also visit Guatemala. “We’re already here (in the Americas), so we might as well visit the rest of our family….” This phenomenon was made worse by a combination of COVID travel restrictions, and the birth of our firstborn whose passport was delayed. So earlier this year we took a _long_ trip to both Guatemala and the US. Nearly three months. During COVID, our longest trip away from home was about 5 days, to northern France. A comfortable day’s drive away. Why didn’t we do the same thing in France? “While we’re here, we might as well also visit Paris, and Versailles, and Nice… and how about Rome, too?” It’s probably obvious. The cost (money, transit time, jet lag, etc.) to visit any of those places is much lower than the cost of visiting our families in the Americas. So we naturally try to make the longer, more expensive trip “more worth it” by batching things together. We also, for the same reasons, make trips to the Americas much less frequently than we make trips around Europe. We naturally do the same thing when creating software. When deploying our software is a long or difficult task, we tend to try to make each deployment “more worth it”. We bundle more changes together. We spend more time preparing each one. The trouble is: large batches are much more stressful, and more risky. 
Both with regard to holiday travel, as well as delivering software. The good news is: there’s not any physical ocean between you and your next software deployment, as there is between me and my in-laws. Unlike in the physical world, there are very few physical constraints on how quickly we can deliver software, and on how small our batches can be. In other words, in the vast majority of cases, software delivery can be made the equivalent of a 5 minute walk to the corner store. Or maybe even just a casual walk to the kitchen for a glass of water. Every. Single. Time. * * * If this seems like magic to you, or a pipe dream that could never be reality on your team, I’d like to invite you to attend my [Lean CD Seminar](https://jhall.io/leancdseminar/), starting October 3. It’s 4 weeks of video instruction, interactive Q&A and a slack community, focused on improving your software delivery using proven techniques I’ve employed at a number of companies. It’s €189 EUR, and comes with a [money back guarantee](https://jhall.io/leancdseminar/#guarantee). I’d love to see you there! * * * _If you enjoyed this message, [subscribe](https://jhall.io/daily) to <u>The Daily Commit</u> to get future messages to your inbox._
jhall
1,202,740
The short-form of if else statement
The short form of if else statement as used in JavaScript // here is a short-form of if...
0
2022-09-25T16:14:13
https://dev.to/geraldkaparo/the-short-form-of-if-else-statement-2m8i
## The short form of if else statement as used in JavaScript ``` // here is a short-form of if else statement: function Major(Engineering){ return Engineering === 'Software Engineering' ? "It's a course of heroes, You can be proud of!" : `Which type of ${Engineering}?`; } ```
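To see both branches of the ternary in action, here is a hypothetical usage example (the function from the snippet above is repeated so this runs on its own):

```javascript
// The ternary short-form of if/else from the article
function Major(Engineering) {
  return Engineering === 'Software Engineering'
    ? "It's a course of heroes, You can be proud of!"
    : `Which type of ${Engineering}?`;
}

// Hypothetical calls exercising both branches
console.log(Major('Software Engineering')); // → "It's a course of heroes, You can be proud of!"
console.log(Major('Civil Engineering'));    // → "Which type of Civil Engineering?"
```

Note that the ternary `condition ? a : b` is an expression, so it can be returned directly, unlike a full if/else statement.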
geraldkaparo
1,202,772
vscode shortcuts
Y'all may already know these. I'm new to vscode, and have found these shortcuts helpful: In a new...
0
2022-09-25T18:02:43
https://dev.to/iseanc/vscode-shortcuts-1c2o
productivity, vscode
Y'all may already know these. I'm new to vscode, and have found these shortcuts helpful: - In a new HTML file, type `!` or `html`, and choose from one of the options Emmet produces, to get a basic HTML template. - In HTML/CSS files, use `CTRL/COMMAND + K, CTRL + C`, to add a comment block. Sadly, it doesn't seem to work in a JavaScript (.js) context. - Highlight one or more lines of code and hit `CTRL/COMMAND + /` to comment/uncomment the block. Very handy!!
iseanc
1,202,873
Working On My First Pull Request
Last week I was presented with the opportunity to contribute to Izyum, a static site generator tool...
0
2022-09-25T21:30:05
https://dev.to/tdaw/working-on-my-first-pull-request-338f
opensource, git, javascript, beginners
Last week I was presented with the opportunity to contribute to [Izyum](https://github.com/Myrfion/izyum), a static site generator tool that creates an HTML file for each `.txt` file provided. Since the application had no existing support for processing markdown files, I began by filing an [issue](https://github.com/Myrfion/izyum/issues/9) for it. ### Overview of the Issue Since Izyum lacked the feature of processing markdown files, my Issue was about implementing initial support for h1, h2, and bold text. I clearly outlined all the features I had in mind in the Issue. ### Writing the Code After filing the Issue, the next step was to write the code and make sure it passed all tests. I was able to implement `h1` and `h2` in the first phase in a new branch named `issue-9`, as I was still trying to figure out how to add support for bold text. Later, after doing some research and through the process of trial and error, I was able to implement support for bold text. #### Challenges Encountered Some of the challenges I faced while working on the code were: - Finding a way to ignore H1 and H2 from being bolded, as headings are in bold by default - Coming up with descriptive variable names ### Creating Pull Request I created a [PR](https://github.com/Myrfion/izyum/pull/10) when I was able to make Izyum support H1 and H2 text. [Tymur](https://github.com/Myrfion) was able to provide me with constructive feedback on how to improve the code. I made revisions to the code to match the feedback and pushed another commit to `issue-9`. In a final commit, I was able to implement support for bold text and update the `README.md` file with instructions on proper usage of Markdown syntax accepted by the SSG. Later, the commit was merged with the `main` branch. ### Final Outcome - Izyum is now able to support markdown files - Users can add `# ` at the beginning of the line to mark it as Heading 1, and the content of the line will be added between the opening and closing `h1` tag. 
- Users can add `## ` at the beginning of the line to mark it as Heading 2, and the content of the line will be added between the opening and closing `h2` tag. - Users can add text between a pair of `**` to mark it as bold, and the text will be added between the opening and closing `strong` tag. To learn more about how to use Izyum with its brand new markdown support, click [here](https://github.com/Myrfion/izyum#implemented-optional-features-).
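As a rough sketch of how such a feature can work (this is illustrative, not Izyum's actual code), the supported subset (`# ` for Heading 1, `## ` for Heading 2, and `**` pairs for bold) can be converted line by line:

```javascript
// Illustrative converter for the Markdown subset described above.
function bold(text) {
  // Text between a pair of ** becomes <strong>…</strong>
  return text.replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>');
}

function lineToHtml(line) {
  // Check '## ' before '# ' so an h2 line is not mistaken for an h1.
  // Headings skip the bold pass, since headings are bold by default
  // (one of the challenges mentioned above).
  if (line.startsWith('## ')) return `<h2>${line.slice(3)}</h2>`;
  if (line.startsWith('# ')) return `<h1>${line.slice(2)}</h1>`;
  return `<p>${bold(line)}</p>`;
}
```

Any line that is not a heading becomes a paragraph, with `**…**` runs replaced by `strong` tags.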
tdaw
1,202,878
THE FINITE STATE MACHINE
Introduction As developers, we often make automation that requires changes from the end...
19,926
2022-09-25T20:26:31
https://dev.to/asapconet/the-finite-state-machine-3n8a
computerscience, programming, devrel, web3
## Introduction As developers, we often build automation that responds to changes from the end user or follows conditions we set our programs to run under, and in doing so we are making initializations that are governed by the finite state machine. This is applied mostly in programming but is also used in different areas like mathematics, artificial intelligence, design of digital systems, compilers, computational linguistics, and the list goes on. Today we will learn precisely how it works, as well as its variations. ### What is a Finite State Machine? The finite state machine can be seen as a "model" that performs certain computational activities: it assumes an initial state, moves through different [specified] states, and returns to the normal [accepting] state. This machine can be used to reason about computer programs [think serialization and logical expressions], which require hardware and software manifestations. No account of the finite state machine [FSM] is complete without recalling its computer science origin under the topic of '*Automata theory*'. **Automaton** is the singular, **automata** the plural; in that light we can also call the FSM a Finite State Automaton, or **FSA** [which I prefer]. Remember I talked about **Turing completeness** in my _[Ethereum Virtual Machine article](https://asapa1.hashnode.dev/the-ethereum-virtual-machine)_, which pointed out how the EVM is Turing complete; if a machine is 'Turing complete' it is said to perform all types of computational manipulations, which arguably makes it more than a finite state machine. On the other hand, a machine that cannot solve all computational problems is said to be Turing incomplete. **NB** Turing machine > FSM. ![Handshake from editionf.com](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pfkry87yntyzyd33l51u.jpeg) **Generally**, the finite state machine is capable of receiving an action [instruction] that leads to a change of state. 
## Types of FSA There are two types of finite state automata, namely: - Deterministic Finite Automata [DFA] - Non-Deterministic Finite Automata [NDFA or NFA] ### DFA Oh yeah, it can also be called a Deterministic Finite Acceptor. It takes a defined string of symbols and determines whether the machine accepts or rejects it, running through the sequence one symbol at a time and doing exactly what the machine's transitions dictate. The DFA is defined by a 5-tuple: Q, ∑, δ, q0, F, each with a unique definition. Want to know more? Follow [here](https://en.wikipedia.org/wiki/Deterministic_finite_automaton). Remember Mortal Kombat, yeah, '_>><<Z_' [forward forward back back z]? haha, now this combination satisfies the state [brutality] in the DFA of a particular character, hence when completed it will execute the action behind it. Chin ching ;-) ![DFA img from [Brillient.org](https://ds055uzetaobb.cloudfront.net/brioche/uploads/rHpmPKo6lq-fsm_prob1.png?width=1200)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7g72yqyfl6b4nwwt31er.png) This image displays a deterministic finite automaton that accepts only certain strings over the alphabet [a,b,c,d], having S1 as the initial state and S4 as the accept state. This means that the string "abaac" leads to the state sequence S1, S2, S1, S2, S2, S4 and is hence accepted. ### NDFA Unlike the DFA, this can move to any combination in the machine sequence without strict bounds on what must happen, which means it can have more than one transition for a given state and symbol. It accepts an input string provided there is a path matching the string that ends in the final [accept] state. 
![NDFA img from [tutorialspoint](https://www.tutorialspoint.com/automata_theory/images/acceptability_of_strings_by_dfa.jpg)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qkey6fd4pgbz1pajw0k5.jpg) Now, this diagram can accept all combinations that lead to it ending on the acceptance state, hence number combinations like [0, 1, 101, 1001...] are welcome because they end at the accept state 'd'. Combinations like [1, 01, 10, 0110...] obviously cannot be accepted. This is often considered better because it saves time, accepts null values, and does not take too many states and transitions to complete execution. ### Applications of FSA #### Light Switch **State**: On or Off **Transition**: The state 'on' changes when the state 'off' is applied #### Elevator **State**: Static, Dynamic **Transition**: When a floor number is pressed, it changes from the static state to its dynamic state, taking the user to the desired floor and then returning to the static state. #### ATM Machine **State**: Dispense, Static, Sing... **Transition**: The machine changes its state from static to dispense if the conditions for withdrawing are met, or sings some motivational songs to a broke user. ## Conclusion The state machine acts like a wardrobe in which you keep your party, burial, and dating clothes; when the mood is triggered, you go ahead and put on the matching outfit, hence changing the state of your dressing. Never will I go without saying thank you for scrolling to the end. I know a lot of people are waiting for you to share this so they can scroll to the end and give a thumbs up too, and might even say something we want. **Muchas Gracias!**
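To make the 5-tuple (Q, ∑, δ, q0, F) concrete, here is a small sketch of a DFA in JavaScript. The machine is a toy example of my own (not taken from the diagrams above): it accepts binary strings containing an even number of '1's.

```javascript
// A DFA as data: start state q0, accepting states F, and transition table δ.
// The state set Q and alphabet ∑ are implicit in the table's keys.
function makeDfa({ start, accept, delta }) {
  return (input) => {
    let state = start;
    for (const symbol of input) {
      state = delta[state] && delta[state][symbol];
      if (state === undefined) return false; // symbol outside the alphabet
    }
    return accept.has(state); // accepted only if we end in an accept state
  };
}

const evenOnes = makeDfa({
  start: 'even',
  accept: new Set(['even']),
  delta: {
    even: { 0: 'even', 1: 'odd' },
    odd:  { 0: 'odd',  1: 'even' },
  },
});

console.log(evenOnes('1010')); // true  (two 1s)
console.log(evenOnes('100'));  // false (one 1)
```

Because a DFA has exactly one transition per state and symbol, the run is a single pass over the input; an NFA would instead track a set of possible states at each step.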
asapconet
1,203,026
Longest Substring With K Distinct Characters
Problem Statement # Given a string, find the length of the longest substring in it with no more than...
0
2022-09-26T03:14:19
https://dev.to/zeeshanali0704/longest-substring-with-k-distinct-characters-bhf
javascript, leetcode
Problem Statement: Given a string, find the length of the longest substring in it with no more than K distinct characters. Example 1: Input: String="araaci", K=2 Output: 4 Explanation: The longest substring with no more than '2' distinct characters is "araa". Note: the function below returns the substring itself; its length is the answer. ```js const longestSubstringWithKDistinctCharacters = (str, k) => { let map = new Map(); let temp = ""; let max = 0; let stri = ""; let end = 0; while (str.length > end) { const nextChar = str[end]; if (map.size < k && !map.has(nextChar)) { map.set(nextChar, 1); temp = temp + nextChar; end++; } else if (map.size <= k && map.has(nextChar)) { map.set(nextChar, map.get(nextChar) + 1); temp = temp + nextChar; end++; } else if (map.size === k && !map.has(nextChar)) { while (map.size === k) { // save the current window before shrinking it if (temp.length > max) { max = temp.length; stri = temp; } let startValue = temp[0]; map.set(startValue, map.get(startValue) - 1); if (map.get(startValue) === 0) { map.delete(startValue); } temp = temp.substring(1); } } } // check the final window, which the loop above never records if (temp.length > max) { max = temp.length; stri = temp; } return stri; }; console.log(longestSubstringWithKDistinctCharacters("csbebbbi", 3)); console.log(longestSubstringWithKDistinctCharacters("araaci", 2)); console.log(longestSubstringWithKDistinctCharacters("araaci", 1)); console.log(longestSubstringWithKDistinctCharacters("cbbebi", 3)); ```
zeeshanali0704
1,203,034
VSCode Custom Colors Per A Project
Learn this great productivity tip to quickly identify different VS Code windows(projects). If you...
0
2022-09-28T01:27:04
https://dev.to/codingwithadam/vscode-custom-colors-per-a-project-15cd
vscode, javascript, productivity, beginners
Learn this great productivity tip to quickly identify different VS Code windows (projects). If you have ever worked on several projects, you know it can be a pain to quickly identify different open VS Code windows when they all look the same. By applying a distinct color to the title bar, side bar, or status bar, you can quickly identify different projects at a glance. For example, if you are working on a backend and a frontend project, you can add a splash of color to the side bar to quickly find the correct project. To add color to individual VS Code projects, we use the workspace settings to apply custom colors to various parts of the VS Code window. VS Code has two types of settings: user and workspace. User settings are global and apply to all VS Code windows. Workspace settings apply to individual projects. Most settings can be configured in the settings menu for both user and workspace. As soon as a workspace setting is added through the VS Code GUI, it will create a .vscode folder with a settings.json file inside. The manual process to add workspace settings is as follows: create a .vscode folder in the root of the VS Code project; in that folder, place a settings.json file; then, guided by IntelliSense, add customizations. For more details check out the following video: {% embed https://youtu.be/AgeMrOPyHzE %}
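As a sketch, a per-project `.vscode/settings.json` using the `workbench.colorCustomizations` setting might look like this (the hex values are just example colors — pick your own):

```json
{
  "workbench.colorCustomizations": {
    "titleBar.activeBackground": "#1f6feb",
    "titleBar.activeForeground": "#ffffff",
    "activityBar.background": "#0d419d",
    "statusBar.background": "#1f6feb"
  }
}
```

Because this file lives in the project's own `.vscode` folder, the colors apply only to that window, so each project can get its own accent.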
codingwithadam
1,203,152
How to Make Tic Tac Toe Using React
In this article, we will make a very famous game tic tac toe using react. Since tic tac toe is very...
0
2022-09-26T06:30:06
https://dev.to/reactjsguru/how-to-make-tic-tac-toe-using-react-1ce7
javascript, react, programming, webdev
In this article, we will make the very famous game tic tac toe using React. Tic tac toe is a simple, well-known game that brings back some childhood nostalgia. It is also popular among developers, who have built it in many other programming languages. Since JavaScript is so popular among developers, we will make this game using ReactJS. We will design the board for it, and it will be a two-player game. We will add some logic to identify the winning conditions, and we will keep a count of the wins for 'x' and 'o'. What is the Tic Tac Toe Game? Tic Tac Toe is a two-player game in which the objective is to take turns and mark the correct spaces in a 3×3 (or larger) grid. Think on your feet but also be careful, as the first player to place three of their marks in a horizontal, vertical, or diagonal row wins the game. One player has the 'x' sign and the other has 'o'. [read more](https://reactjsguru.com/how-to-make-tic-tac-toe-using-react/)
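For the winning-condition logic mentioned above, a common sketch (my own illustration, not necessarily the article's full implementation) assumes the 3×3 board is stored as a flat array of nine cells holding 'x', 'o', or null:

```javascript
// All eight winning lines of a 3×3 board, as index triples into a flat array.
const LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
  [0, 4, 8], [2, 4, 6],            // diagonals
];

// Returns 'x' or 'o' if that player has three in a row, otherwise null.
function winner(board) {
  for (const [a, b, c] of LINES) {
    if (board[a] && board[a] === board[b] && board[a] === board[c]) {
      return board[a];
    }
  }
  return null;
}
```

In a React component this check would typically run after each move, for example inside the click handler, to update the win counters for 'x' and 'o'.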
reactjsguru
1,203,368
Running multiple version of JDK in Windows commandLine, the fun? way
In this blog I'll walk you through on how to config your various terminal to dynamically switch JDK...
0
2022-09-26T16:50:24
https://dev.to/zeagur/running-multiple-version-of-jdk-in-windows-commandline-the-fun-way-4goj
java, productivity, terminal, windows
In this blog I'll walk you through how to configure your various terminals to dynamically switch JDK versions in a painless? and a bit cooler way, right from your terminal. ## Table Of Contents * [Prepare JDK and prerequisites](#prepare-jdk-and-prerequisites) * [Command Prompt](#command-prompt) * [Powershell 5 & 7](#powershell-5-amp-7) * [Cmder](#cmder) --- > why not use the wsl? yes, we can configure it easily like we would in any bash on linux, which is cool 😁 and another great thing I found out recently about WSL is that it can execute windows binaries like those `exe` files directly from the wsl terminal. Don't believe me? here's an example 😜 > I'll test on the `oc` cli that I'm using to do stuff with my openshift cluster as an example in this case. - let's start by checking where the `oc.exe` is located ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/avr2nxd5uwffmcbdlmqj.png) - test some command ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yd66i00938vbjh9u3dpy.png) - now switch to the ubuntu wsl terminal and notice the `oc` bin that belongs to this ubuntu wsl ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gywvplwwvdkf0afhvli7.png) - let's test some command; we will notice the `unauth` response since I didn't share the config between windows/wsl ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a0yzh7m74bsni8i4eky8.png) - now let's `cd` into the windows folder with the `exe` file we want to test. In order to access our windows files, the path has to be prefixed with `/mnt/` followed by the normal drive letter without the colon symbol > for example: /mnt/c/windows/system32/ ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4s0c3tijtsjnxzpcgcy.png) - when we call the `oc.exe` file we'll notice the response is returned normally instead of `unauth` like the above command; this is because we invoked the one that belongs to the windows env, which is the 
same one we invoked in windows `cmd`. Amazing, isn't it? ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qzkzho3aow1qm8i7pgdg.png) --- ok, but that's not the point of this blog. If you're like me and don't like context-switching too much, only using it when necessary, let's continue and see what we can do to run multiple JDKs in the windows env. --- ## 🤖 Prepare JDK and prerequisites Usually, when we install a JDK package through the windows installer, we'll end up with something like this ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u9wy9q4j8jxxtke4ilyb.png) and when we run the java version check, it'll run one of these 3 depending on how you installed them; notice that I didn't set the `JAVA_HOME` yet ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7uw1w1rlc54gpgi5ualq.png) next, you'll need the [chocolatey](https://docs.chocolatey.org/en-us/choco/setup#install-with-cmd.exe) package manager for its `refreshenv` utility. If you don't want to install chocolatey for whatever reason, you can get this util directly from the [chocolatey github](https://github.com/chocolatey/choco/blob/1.1.0/src/chocolatey.resources/redirects/RefreshEnv.cmd), then register the `bat` in the `PATH` environment variable, and that's it for the preparation part. 
## 💻 Command Prompt in order to use an `alias` or other custom commands, we'll have to go the extra mile (or km) to config this `ol-cmd` terminal (seriously, please replace it with something powerful, Microsoft 😗) - Choose the location to store our new script files, in this case I'll use `cmd_script` in my `%userprofile%` to store it <figure> <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kpbpmnj0c9k0qmzr7g6m.png"> <figcaption>create new folder to use as script location</figcaption> </figure> --- - add it to the `PATH` by going to environment variable settings <figure> <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/twf2ikzi8lfwjwq4muae.png"> <figcaption>type `env` in search bar</figcaption> </figure> --- <figure> <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v0etbm5scwxvxz6y2t4o.png"> <figcaption>click on `Environment Variable`</figcaption> </figure> --- <figure> <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/slgx2o4b02ynr2rlcivf.png"> <figcaption>look for `PATH` then edit it to add our script location we created earlier</figcaption> </figure> --- - create new folder named `alias` inside the script folder then add it to the `PATH` <figure> <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30g3drw9tfliiigu4033.png"> <figcaption>alias folder will serve as our alias</figcaption> </figure> --- - create new script called `usejdk.cmd` or `usejdk.bat` <figure> <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vfguszegm7wf3a02tm2r.png"> <figcaption>create new script in the script folder</figcaption> </figure> > the above process should be the same when you add `refreshEnv.cmd` if you didn't use chocolatey --- - add the following script to the `usejdk` bat file ```shell @echo off set JDK_VERSION=%~1 if "%JDK_VERSION%" == "" ( goto usage ) else if "%JDK_VERSION%" == "8" ( set JAVA_HOME="C:\Program Files\Eclipse Adoptium\jdk-8.0.345.1-hotspot" ) GOTO :eof :usage ECHO Please 
select your Java Version ECHO Usage: useJDK [version] EXIT /B 1 ``` > change the JAVA_HOME path to suit your needs and add additional conditions if you have more than 2 JDKs - Test the script to see if it's working correctly ```shell C:\Users\Rujra>usejdk 11 C:\Users\Rujra>echo %JAVA_HOME% "C:\Program Files\Eclipse Adoptium\jdk-11.0.16.101-hotspot" C:\Users\Rujra>java -version openjdk version "1.8.0_345" OpenJDK Runtime Environment (Temurin)(build 1.8.0_345-b01) OpenJDK 64-Bit Server VM (Temurin)(build 25.345-b01, mixed mode) C:\Users\Rujra> ``` > notice that even though we set the `JAVA_HOME`, the java runtime didn't change; that's because it's registered in the `PATH` env, hardcoded. <figure> <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ip0bkhzeqjf5qjaig0i0.png"> <figcaption>the hard-coded path env that we didn't touch earlier</figcaption> </figure> --- - this can be solved in 2 ways 1. remove the jdk/bin path from our `PATH` env then add a new one with `%JAVA_HOME%\bin` 2. leave it be but move those down to the bottom of `PATH` env then add the java home from `1.` and move it up to the top - this method can be useful if you want to run something like `where java` which will list all known java binaries in `PATH` <figure> <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9i32leoxhzu8vgkgz286.png"> <figcaption>the PATH env after some changes</figcaption> </figure> --- - let's try again ```shell C:\Users\Rujra>usejdk 11 C:\Users\Rujra>echo %JAVA_HOME% "C:\Program Files\Eclipse Adoptium\jdk-11.0.16.101-hotspot" C:\Users\Rujra>java -version openjdk version "1.8.0_345" OpenJDK Runtime Environment (Temurin)(build 1.8.0_345-b01) OpenJDK 64-Bit Server VM (Temurin)(build 25.345-b01, mixed mode) ``` > still not working 😾 - now let's check the `PATH` ```shell C:\Users\Rujra>PATH PATH=C:\Program Files\WindowsApps\Microsoft.WindowsTerminal_1.14.2282.0_x64__8wekyb3d8bbwe;%JAVA_HOME%\bin;[other path env blah blah blah]; ``` > notice the %JAVA_HOME% in 
PATH? that's where our `refreshenv` will come into play next - modify the `usejdk` script with our long-awaited `refreshenv` util ```shell ... ) else if "%JDK_VERSION%" == "8" ( set JAVA_HOME="C:\Program Files\Eclipse Adoptium\jdk-8.0.345.1-hotspot" refreshenv ) ... ``` - third?? time's a charm, I hope 😗. ```shell C:\Users\Rujra>usejdk 11 Refreshing environment variables from registry for cmd.exe. Please wait...Finished.. C:\Users\Rujra>PATH PATH="C:\Program Files\Eclipse Adoptium\jdk-11.0.16.101-hotspot"\bin;[other path env blah blah blah]; C:\Users\Rujra>java -version openjdk version "11.0.16.1" 2022-08-12 OpenJDK Runtime Environment Temurin-11.0.16.1+1 (build 11.0.16.1+1) OpenJDK 64-Bit Server VM Temurin-11.0.16.1+1 (build 11.0.16.1+1, mixed mode) ``` > now it's working, WOW, so easy! 🤯 I almost had a migraine tbh🤷‍♂️. --- ## 💠 Powershell 5 & 7 In powershell, it's a bit easier to manipulate the env vars/aliases since everything is stored inside the "special" `env:` and `alias:` PSDrives > why does it matter? because we will use these to help us switch between multiple JDKs in a moment > ! important: before we proceed please make sure you have set the `PATH` variable with `%JAVA_HOME%\bin` at the top like we did in the `CMD` steps #### Environment Variable to display the current env we can do the following ```shell PS C:\Users\Rujra> dir env: Name Value ---- ----- ALLUSERSPROFILE C:\ProgramData ChocolateyInstall C:\ProgramData\chocolatey ``` > to query specific key, just add the key to path `dir env:\some_thing` > to query value only, add `$` to env `$env:some_thing` --- #### Alias displaying an alias works the same way as an env var ```shell PS C:\Users\Rujra> dir alias: CommandType Name Version Source ----------- ---- ------- ------ Alias % -> ForEach-Object Alias ? 
-> Where-Object Alias CFS -> ConvertFrom-String 3.1.0.0 Microsoft.PowerShell.Utility ``` > to query specific key, just add the key to path `dir alias:\some_thing` > to query value only, add `$` to env `$alias:some_thing` --- #### Steps - in order to persist configuration across terminal sessions, we'll have to create a profile for powershell by running the following command ```powershell new-item -path $profile -itemtype file -force ``` - now open the profile in your favourite ide and add the following script ```powershell function java8 { $Env:JAVA_HOME = "[your JDK ROOT PATH]" refreshenv } function java11 { $Env:JAVA_HOME = "[your JDK ROOT PATH]" refreshenv } ... ``` - re-open your powershell and see if the functions are loaded normally; the first time, however, you might run into this problem ```powershell . : File ...\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 cannot be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at https:/go.microsoft.com/fwlink/?LinkID=135170. At line:1 char:3 + . '...\WindowsPowerShell\Microsoft.Powe ... + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : SecurityError: (:) [], PSSecurityException + FullyQualifiedErrorId : UnauthorizedAccess ``` > this [stackoverflow](https://stackoverflow.com/a/26955050) answer gives a great explanation of how to solve this issue without compromising much of your machine's security - after powershell starts successfully, now it's time to test our little script ```powershell PS C:\Users\Rujra> java8 Refreshing environment variables from registry for cmd.exe. Please wait...Finished.. 
PS C:\Users\Rujra> $env:PATH %JAVA_HOME%\bin;[other PATH env blah blah blah] PS C:\Users\Rujra> $env:JAVA_HOME C:\Program Files\Eclipse Adoptium\jdk-8.0.345.1-hotspot PS C:\Users\Rujra> java -version openjdk version "17.0.4.1" 2022-08-12 OpenJDK Runtime Environment Temurin-17.0.4.1+1 (build 17.0.4.1+1) OpenJDK 64-Bit Server VM Temurin-17.0.4.1+1 (build 17.0.4.1+1, mixed mode, sharing) ``` > notice that even with `refreshenv` it didn't work; this is because it refreshes `cmd.exe` and not `powershell.exe`, so we'll have to come up with another solution - from the previous test, we can see that it didn't work as expected, so let's try another method by modifying our script as below ```powershell function java8 { $Env:JAVA_HOME = "[your JDK ROOT PATH]" $env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") } function java11 { $Env:JAVA_HOME = "[your JDK ROOT PATH]" $env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") } ... ``` > this replaces `refreshenv` with manually loading the new env vars from the machine env into the current `powershell` session - now let's close and re-open `powershell` and see if it works correctly this time. 
```powershell PS C:\Users\Rujra> $env:PATH %JAVA_HOME%\bin;[other PATH env blah blah blah] PS C:\Users\Rujra> java -version openjdk version "17.0.4.1" 2022-08-12 OpenJDK Runtime Environment Temurin-17.0.4.1+1 (build 17.0.4.1+1) OpenJDK 64-Bit Server VM Temurin-17.0.4.1+1 (build 17.0.4.1+1, mixed mode, sharing) PS C:\Users\Rujra> $env:JAVA_HOME PS C:\Users\Rujra> java11 PS C:\Users\Rujra> $env:PATH C:\Program Files\Eclipse Adoptium\jdk-11.0.16.101-hotspot\bin;[other PATH env blah blah blah] PS C:\Users\Rujra> $env:JAVA_HOME C:\Program Files\Eclipse Adoptium\jdk-11.0.16.101-hotspot PS C:\Users\Rujra> java -version openjdk version "11.0.16.1" 2022-08-12 OpenJDK Runtime Environment Temurin-11.0.16.1+1 (build 11.0.16.1+1) OpenJDK 64-Bit Server VM Temurin-11.0.16.1+1 (build 11.0.16.1+1, mixed mode) ``` > now it's working, a bit easier to config than `CMD` for sure, cheers 🥂 --- ## 🖲️ Cmder Now we've come to the last terminal I'll config for this blog: the powerful [Cmder](https://cmder.app/). Cmder itself has many features off the shelf, including linux commands that you can use alongside your normal windows commands, whether it's `ls -lha`, `dir`, etc., but now I'll add a bit more power to it by adding the ability to dynamically switch JDK versions with an alias. > ! 
important: before we proceed please make sure you have set the `PATH` variable with `%JAVA_HOME%\bin` at the top like we did in the `CMD` or `Powershell` steps - first, after you've downloaded and extracted cmder to your machine, you'll have something like this below ```bash D:\Programs\Cmder λ ls -lha total 169K drwxr-xr-x 1 Rujra 197609 0 Feb 3 2022 ./ drwxr-xr-x 1 Rujra 197609 0 Sep 4 13:47 ../ drwxr-xr-x 1 Rujra 197609 0 Jan 26 2020 bin/ -rwxr-xr-x 1 Rujra 197609 139K Jan 17 2022 Cmder.exe* -rw-r--r-- 1 Rujra 197609 33 Mar 9 2020 cmder_shell.bat drwxr-xr-x 1 Rujra 197609 0 Sep 27 18:56 config/ drwxr-xr-x 1 Rujra 197609 0 Jan 26 2020 icons/ -rw-r--r-- 1 Rujra 197609 1.1K Jan 17 2022 LICENSE drwxr-xr-x 1 Rujra 197609 0 Oct 31 2021 opt/ drwxr-xr-x 1 Rujra 197609 0 Feb 3 2022 vendor/ -rw-r--r-- 1 Rujra 197609 0 Jan 17 2022 'Version 1.3.19.1181' ``` - inside the `config` folder there will be some config files, but we will focus on only 2 of them - user_aliases.cmd - user_profile.cmd ```bash D:\Programs\Cmder\config λ ls -lha total 508K drwxr-xr-x 1 Rujra 197609 0 Sep 27 18:58 ./ drwxr-xr-x 1 Rujra 197609 0 Feb 3 2022 ../ -rw-r--r-- 1 Rujra 197609 4.2K Sep 27 18:58 clink.log -rw-r--r-- 1 Rujra 197609 377K Sep 27 18:56 clink_history -rw-r--r-- 1 Rujra 197609 121 Sep 27 18:58 clink_history_23924 -rw-r--r-- 1 Rujra 197609 97 Sep 27 18:56 clink_history_23924.removals -rw-r--r-- 1 Rujra 197609 0 Sep 27 18:56 clink_history_23924~ -rw-r--r-- 1 Rujra 197609 507 Oct 31 2021 clink_settings -rw-r--r-- 1 Rujra 197609 2.0K Mar 16 2022 cmder_prompt_config.lua -rw-r--r-- 1 Rujra 197609 29K May 11 2021 mini_dump.dmp drwxr-xr-x 1 Rujra 197609 0 Feb 3 2022 profile.d/ -rw-r--r-- 1 Rujra 197609 887 Jan 17 2022 Readme.md -rw-r--r-- 1 Rujra 197609 672 Sep 26 19:29 user_aliases.cmd -rw-r--r-- 1 Rujra 197609 967 Sep 26 18:42 user_profile.cmd -rw-r--r-- 1 Rujra 197609 408 Dec 23 2018 user_profile.ps1 -rw-r--r-- 1 Rujra 197609 54K Jun 23 16:18 user-ConEmu.xml ``` - inside `user_profile.cmd` add 
the following config ```bat ... :: JAVA config set JAVA_8_HOME="[your JDK ROOT PATH]" set JAVA_11_HOME="[your JDK ROOT PATH]" :: Other jdk you're using ... ``` - now that we've set the env vars in the profile, we'll set aliases in `user_aliases.cmd` to switch the Java version whenever we need to. ```bat ;= JAVA_HOME java8=set JAVA_HOME=%JAVA_8_HOME%&refreshenv java11=set JAVA_HOME=%JAVA_11_HOME%&refreshenv ``` - save and open up `Cmder` and let's try the new aliases ```bash C:\Users\Rujra λ echo %PATH% %JAVA_HOME%\bin;[other PATH env blah blah blah]; C:\Users\Rujra λ echo %JAVA_HOME% %JAVA_HOME% C:\Users\Rujra λ java -version openjdk version "17.0.4.1" 2022-08-12 OpenJDK Runtime Environment Temurin-17.0.4.1+1 (build 17.0.4.1+1) OpenJDK 64-Bit Server VM Temurin-17.0.4.1+1 (build 17.0.4.1+1, mixed mode, sharing) C:\Users\Rujra λ java11 Refreshing environment variables from registry for cmd.exe. Please wait...Finished.. C:\Users\Rujra λ echo %PATH% "C:\Program Files\Eclipse Adoptium\jdk-11.0.16.101-hotspot"\bin;[other PATH env blah blah blah]; C:\Users\Rujra λ echo %JAVA_HOME% "C:\Program Files\Eclipse Adoptium\jdk-11.0.16.101-hotspot" C:\Users\Rujra λ java -version openjdk version "11.0.16.1" 2022-08-12 OpenJDK Runtime Environment Temurin-11.0.16.1+1 (build 11.0.16.1+1) OpenJDK 64-Bit Server VM Temurin-11.0.16.1+1 (build 11.0.16.1+1, mixed mode) ``` > works like a charm 😁👾 --- and that's it, thank you for reading, this is my first (finished) blog, any feedback & suggestions are gladly welcome! 🙌 ---
zeagur
1,203,398
Create Azure AD groups in bulk using PowerShell with a CSV input.
I recently had to replicate an Azure AD group structure from one tenant to another and wrote a super...
0
2022-09-26T12:48:14
https://dev.to/lakkimartin/create-azure-ad-groups-in-bulk-using-powershell-with-a-csv-input-1n0e
powershell, azure, devops
I recently had to replicate an Azure AD group structure from one tenant to another and wrote a super simple script to speed things up. ## Set up PowerShell Firstly, you will need to install the Azure AD PowerShell module, which allows you to interact with AAD. ```powershell Install-Module AzureADPreview ``` Authenticate to your Azure Active Directory tenant by running ```powershell Connect-AzureAD -TenantId "<insert tenant id here>" ``` ## Retrieve AD Groups I used a simple command to export the AD groups from the tenant I want to copy from to a csv file ```powershell Get-AzureADMSGroup | select DisplayName | Export-Csv c:\development\azureadusers.csv -NoTypeInformation ``` You can modify the export path as needed. ## Create the AD Groups Now that you have the AD groups exported you can run the script. Make sure you re-authenticate to the new tenant where you want to set up the groups. Here is an example of what my CSV looks like: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/14aor2fhip60zvjbohz0.png) Note that I am only filtering the Display Names of the AD groups without the descriptions and owners. I haven't had a chance to script this up but it's fairly straightforward to do. Run the script to set up the groups: ```powershell $Groups = Import-Csv -Path "C:\development\azureadusers.csv" foreach($Group in $Groups) { New-AzureAdGroup -DisplayName $Group.DisplayName -MailEnabled $False -SecurityEnabled $True -MailNickName "NotSet" } ``` The AD groups are now created! ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/paookjm9t9dfrihqqc9d.png) There are some limitations with the Azure AD module. It doesn't support all parameters, so for example you can't enable groups to be assignable to roles.
You can achieve the same by using the Az.Resources [module](https://learn.microsoft.com/en-us/powershell/module/az.resources/new-azadgroup?view=azps-8.3.0). You can also expand the script to add descriptions to the AD groups and replicate/assign owners. Hope this helps.
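As a rough sketch of that expansion (untested; it assumes you re-export the CSV with a `Description` column, which the export command in this post does not include):

```powershell
# Hypothetical sketch: carry descriptions across as well.
# First re-export with the extra column:
#   Get-AzureADMSGroup | select DisplayName, Description | Export-Csv C:\development\azureadusers.csv -NoTypeInformation
$Groups = Import-Csv -Path "C:\development\azureadusers.csv"

foreach ($Group in $Groups) {
    New-AzureADGroup -DisplayName $Group.DisplayName `
        -Description $Group.Description `
        -MailEnabled $False -SecurityEnabled $True -MailNickName "NotSet"
}
```

Replicating owners would work similarly, pairing `Get-AzureADGroupOwner` on the source tenant with `Add-AzureADGroupOwner` on the target.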
lakkimartin
1,203,608
display: inline magic space!
Ever wondered when you adding a image to a HTML, you might find little space around it. It's not...
0
2022-09-26T15:41:01
https://dev.to/abhijitez/display-inline-magic-space-2caa
Ever wondered why, when you add an image to HTML, you might find a little space around it? It's not border, margin, padding, width or height; so what is it? In HTML, images are treated as inline by default, so images are essentially text in the eyes of HTML, and as with text, there is spacing between lines. The additional height comes from the line height, which can be removed by adding `line-height: 0` to the container or by making the image `display: block`. I personally like the first approach, as it doesn't change the flow of the element.
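A minimal illustration of both fixes described above (the class names here are just for the demo; use one option or the other):

```css
/* Option 1: remove the line box's extra height on the wrapper,
   so the inline image no longer reserves space for descenders. */
.img-wrapper {
  line-height: 0;
}

/* Option 2: opt the image out of inline (text) layout entirely. */
.img-wrapper img {
  display: block;
}
```

Option 1 keeps the image inline, so surrounding content flows as before; option 2 changes the element's flow.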
abhijitez
1,203,842
DAY 6: FIZZ-BUZZ-MULTITHREADED
Hey! it's day 6 of 10 days of coding challenge with I4G. Today's task was to write a code that...
0
2022-09-26T22:43:03
https://dev.to/eazylink/day-6-fizz-buzz-multithreaded-54f9
java, codereview
Hey! It's day 6 of the 10 days of coding challenge with I4G. Today's task was to write code that produces multithreaded FizzBuzz output. **Thought process:** **Understanding of the problem:** The first approach was to understand the task and the constraints attached. Here we are writing code that takes in an integer and prints Fizz if the number is divisible by 3, Buzz if the number is divisible by 5, FizzBuzz if the number is divisible by both 3 and 5, or prints the number if it meets none of the above conditions. **Solution:** To achieve this, we need a shared counter that will be used to iterate up to the given integer. For each iteration, the conditions above are tested and the function that meets the condition is executed. **Algorithm:** 1. Initialize an integer count to 1 2. Set a while loop for each function with condition: count <= n (the given integer) 3. If count is divisible by 3 but not by 5, call the printFizz function 4. If count is divisible by 5 but not by 3, call the printBuzz function 5. If count is divisible by both 3 and 5, call the printFizzBuzz function 6. If count is divisible by neither 3 nor 5, call the printNumber function Check out the code here: https://leetcode.com/problems/fizz-buzz-multithreaded/submissions/
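The algorithm above can be sketched in Java like this. Note this is my own simplified illustration using a shared monitor and an output list, not the exact LeetCode method signatures (`printFizz(Runnable)` etc.): each of the four worker threads claims only the numbers its predicate matches and waits otherwise.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntPredicate;

class FizzBuzz {
    private final int n;
    private int count = 1;                       // shared counter, guarded by the monitor
    private final List<String> out = new ArrayList<>();

    FizzBuzz(int n) { this.n = n; }

    // Each worker claims only the numbers its predicate matches; otherwise it waits.
    private void worker(IntPredicate mine, String word) {
        synchronized (this) {
            while (count <= n) {
                if (mine.test(count)) {
                    out.add(word == null ? Integer.toString(count) : word);
                    count++;
                    notifyAll();                 // wake the other workers to re-check
                } else {
                    try { wait(); } catch (InterruptedException e) { return; }
                }
            }
            notifyAll();                         // let any remaining waiters exit too
        }
    }

    // Spawn the four workers, wait for them all, and return the combined output.
    static List<String> play(int n) {
        FizzBuzz fb = new FizzBuzz(n);
        Thread[] ts = {
            new Thread(() -> fb.worker(i -> i % 3 == 0 && i % 5 != 0, "fizz")),
            new Thread(() -> fb.worker(i -> i % 5 == 0 && i % 3 != 0, "buzz")),
            new Thread(() -> fb.worker(i -> i % 15 == 0, "fizzbuzz")),
            new Thread(() -> fb.worker(i -> i % 3 != 0 && i % 5 != 0, null))
        };
        for (Thread t : ts) t.start();
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return fb.out;
    }
}
```

The `notifyAll()` after every increment is what keeps the four loops making progress: whichever thread doesn't own the current number releases the monitor via `wait()` and is woken when the counter advances.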
eazylink
1,204,134
Free Comprehensive Webinar: Ways on How to Optimize JavaScript Apps
Were you thinking of improving your JavaScript applications’ performance? Join Dmytro Mezhenskyi,...
0
2022-09-27T07:42:03
https://medium.com/@IderaDevTools/free-comprehensive-webinar-ways-on-how-to-optimize-javascript-apps-e371a3fe74e3
javascript, frontend, webdev, filestack
Were you thinking of improving your JavaScript applications’ performance? [Join Dmytro Mezhenskyi](https://register.gotowebinar.com/register/347587937260509964?utm_source=PressRelease&utm_medium=Leads%20Acquisition&utm_content=webinar0922), Developer Expert of [Decoded Frontend](https://www.youtube.com/c/DecodedFrontend), on **September 30, 2022 — at 11 AM CT**, and learn about various optimization techniques you can use to speed up your apps. ## The webinar will cover a wide range of topics, including: - Web graphic optimizations - Network optimizations - Page loading - JavaScript runtime optimizations - Tools to measure performance - Team performance Because of its broad scope, optimizing applications can be challenging even for experienced developers. Developers spend a lot of time improving the performance of their apps due to heavy research that sometimes leads to more confusion. Learning about optimization might be overwhelming for some, but you don’t have to scour the internet and collate all the necessary information. Dmytro has gathered some practical strategies to help you keep your JavaScript applications in their best shape. The webinar aims to let you discover the different ways you can speed up your apps in a clear and organized manner. Maintaining applications can be exhausting enough. Enhancing their performance for the experience users will enjoy doesn’t have to be complicated. This webinar is brought to you by [Filestack](https://www.filestack.com/). Are you interested? [Click here](https://register.gotowebinar.com/register/347587937260509964?utm_source=PressRelease&utm_medium=Leads%20Acquisition&utm_content=webinar0922) to register now; see you there!
ideradevtools
1,204,136
DPS909 Blog - Lab 3: Managing Simultaneous Changes
This week for my Open-Source course (DPS909), we were introduced the concept of multiple simultaneous...
0
2022-09-28T21:39:44
https://dev.to/alexsam29/dps909-blog-lab-3-managing-simultaneous-changes-988
opensource
This week for my Open-Source course (DPS909), we were introduced to the concept of multiple simultaneous changes in a single project. The purpose of this lab was to help us practice creating branches for specific (simultaneous) issues, merging, dealing with merge conflicts and identifying commits on GitHub. ## Background In [lab 2](https://dev.to/alexsam29/dps909-blog-lab-2-contributing-and-submitting-pull-requests-355o) we practiced creating pull requests (PR) on other people's repositories. In that lab, we only worked on one feature in one branch before creating the PR. Now it's time to start learning how to add multiple features, in parallel, in our own repositories by working in separate branches. ## Features Added I continued working on my [Static Site Generator](https://github.com/alexsam29/ssg-cli-tool) and decided to add two more features. I created issues for each feature that I wanted to add. ### [Issue 14](https://github.com/alexsam29/ssg-cli-tool/issues/14): Support for HTML language codes The first issue I created focused on adding support for HTML language codes. In the original state, the HTML files generated automatically defaulted to `en` as the language code. I wanted to add the ability for a user to specify the language for the generated HTML files by adding a `-l`/`--lang` argument. #### Code Changes To accomplish this task, I [added](https://github.com/alexsam29/ssg-cli-tool/commit/d49eaccf8216d3524985a469d1c142d470399097#diff-bfe9874d239014961b1ae4e89875a6155667db834a410aaaa2ebe3cf89820556R49-R52) a new option using the [yargs](https://www.npmjs.com/package/yargs) module. This allows me to read whatever the user inputs after using `-l`/`--lang`. Following this, I [added](https://github.com/alexsam29/ssg-cli-tool/commit/d49eaccf8216d3524985a469d1c142d470399097#diff-bfe9874d239014961b1ae4e89875a6155667db834a410aaaa2ebe3cf89820556R93-R97) a conditional statement that defaults to `en-CA` whenever a language code is not inputted.
Creating the HTML files containing the user-specified language code was a simple process. This tool already uses the `create-html` npm package, so all I needed to do was [add](https://github.com/alexsam29/ssg-cli-tool/commit/d49eaccf8216d3524985a469d1c142d470399097#diff-bfe9874d239014961b1ae4e89875a6155667db834a410aaaa2ebe3cf89820556R201) the `lang` option to the `createHTML` function. ### [Issue 15](https://github.com/alexsam29/ssg-cli-tool/issues/15): Support for inline code blocks in Markdown files The second issue I created was focused on adding support for inline code block parsing in Markdown files. So, text enclosed with a single backtick should get rendered as `<code>...text...</code>` in the generated HTML file. #### Code Changes This was an even simpler feature to add because there was already existing code to support bold and italic text in Markdown files. The only thing I needed to [add](https://github.com/alexsam29/ssg-cli-tool/commit/1ba7058388d424a347709237ec4c51c77fb3a219#diff-bfe9874d239014961b1ae4e89875a6155667db834a410aaaa2ebe3cf89820556R160) was one line that calls the `processMD` function, which was added in an earlier [pull request](https://github.com/alexsam29/ssg-cli-tool/pull/13). This function is able to parse anything with an opening and closing tag. I just needed to specify the tag, in this case a backtick ( ` ). ### Problems and Lessons Learned I had no real issues during this lab. I attribute this to my effort to make as few code changes as possible when adding new features. I only had to fix one conflict in the issue 15 [merge commit](https://github.com/alexsam29/ssg-cli-tool/commit/1ba7058388d424a347709237ec4c51c77fb3a219) and it had nothing to do with the actual code, just conflicts in `README.md`. I also learned how to work with multiple code changes in parallel on separate topic branches. It was a little difficult holding myself back from making unrelated changes whenever I noticed issues.
But as I said, holding back was probably the reason I ran into so few conflicts.
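As a rough illustration of the kind of paired-delimiter parsing described above (a hypothetical sketch, not the project's actual `processMD` implementation; it only handles single-character delimiters like the backtick):

```javascript
// Hypothetical sketch of a paired-delimiter parser in the spirit of the
// processMD function mentioned above: wrap any run of text enclosed by
// `delim` in the given HTML tag.
function processInline(text, delim, tag) {
  // Escape the delimiter in case it is a regex metacharacter (e.g. "*").
  const esc = delim.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  // Match delimiter, then one or more non-delimiter chars, then delimiter.
  const re = new RegExp(`${esc}([^${esc}]+)${esc}`, "g");
  return text.replace(re, `<${tag}>$1</${tag}>`);
}
```

Because the delimiter is a parameter, the same function covers backticks for `<code>` as well as markers like `*` for bold/italic.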
alexsam29
1,204,492
How we moved from Artifactory and saved $200k p.a. Part 2 of 5 - Design
Introduction Welcome back to Part 2 of our 5-part series on 'How we moved from Artifactory...
19,948
2022-09-28T15:33:26
https://www.paulmowat.co.uk/blog/how-we-moved-from-artifactory-and-saved-200k/part-2-design
aws, artifactory, codeartifact, ecr
## Introduction Welcome back to Part 2 of our 5-part series on 'How we moved from Artifactory and saved $200k p.a'. If you are just joining we recommend jumping back to the beginning and starting from there. ## Decision making The nature of larger projects such as these requires plenty of discussion and decision-making around temporary and permanent processes. We had lots of data to migrate and we needed to be efficient in our decision-making process. We decided upon using [Architecture Decision Records](https://adr.github.io/) to log the key implementation decisions which significantly helped us deliver consistency throughout our support and guidance. As it turned out, undertaking this method of logging was not onerous and we ended up with records for around a dozen key strategic choices that we made; an example of one being the choice to utilise a spot fleet of EC2 workers to perform the migration versus something like AWS Batch or ECS. At first glance, we expected to go with a solution based on AWS Batch or AWS ECS but we had requirements to move resources such as Windows container images and it was so helpful to be able to easily recover the decision steps when we moved to create tooling to support this. ## Workshopping Workshopping commenced on the 10th of June 2022 and we had until the 4th of July 2022 to perform the required analysis, design and implement our solution. ![workshop image](https://www.paulmowat.co.uk/static/images/how-we-moved-from-artifactory-and-saved-200k/part-2/kvalifik-5Q07sS54D0Q-unsplash.jpg) ### Analysis of requirements One of the first items of business was to determine which artefact types it was essential to support, those that would be unsupported and any transitions from these to corresponding supported types. Then we would need to determine the options to migrate to, whilst fulfilling the necessary obligations to supported packages and platforms. 
Over the past few years, engineering at Advanced has been consolidating its toolchain and programming languages adopted by default. In no way intent on dissuading reviews of new or emerging options, but rather adding consistency in those used and bringing a larger collective intelligence to engineering as a whole. From analysing our usage within Artifactory we settled upon support for [npm](https://www.npmjs.com/), [NuGet](https://www.nuget.org/), generic artefacts (zip, exe, dll etc), [Docker](https://www.docker.com/) images and [Maven](https://maven.apache.org/). We quickly determined that our biggest challenge would be Docker images, accounting for greater than 50% of our consumed storage, with several repositories holding more than 1 TB of image data. Latterly, Maven would also prove challenging. From this analysis, we were acutely (and financially) aware that we were also wastefully holding onto obsolete build artefacts. We decided to use this as an opportunity to leverage our engineering teams to review and select the versions of artefacts that our products needed to retain. This would help reduce the scale of the migration ahead somewhat and perform some well-overdue housekeeping. After all, there is no point in migrating and paying for artefacts that are no longer required. ### Solution analysis Having gathered an understanding of what needed support and delivery, we had to identify where we were going to migrate to. AWS is our preferred Cloud Provider and platform, as well as a key technical partner. It was a natural choice to look at their services for our solution. From investigation, we found that [AWS CodeArtifact](https://aws.amazon.com/codeartifact/) was a decent fit for supporting npm, NuGet, Maven and Python (if required in the future), however, it was not a complete match for all our requirements. 
Favourably, [S3](https://aws.amazon.com/s3/) is an excellent fit for generic artefacts, and [Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/) is perfectly appropriate for Docker images (even leading us to correct misunderstandings between images and repositories internally!). We now had the artefact types we needed to support at a high level and where they were going to migrate to. ### Solution design Now we firmly knew our direction, we needed to decide how we get there. Initially, we considered publishing guidance around best practices for various AWS services to satisfy our artefact requirements but ultimately that was deemed unmaintainable. We wanted to finish the project with our artefact management strategy in a much better position than it started. Significant to us was ensuring we had the ability to define convention, consistency, clear guidance and expectations. We aimed to provide a maintainable solution that continues to build upon the best practices as it matures. This led us to agree that it was important for the culmination of the migration to result in a new, custom service that any engineering team within Advanced could consume. ***Advanced Artefacts*** was born. We now had two streams we needed to complete within the project: 1. The Advanced Artefacts service 2. The Migration We will get into the detail around these in future posts. ## Support channels As mentioned previously Advanced has over seven hundred engineers from across the globe working on many projects and we needed to identify a strategy for how we could support them in the best way possible. We came up with the following three-pronged approach. ### Documentation We decided early that we needed to document all parts of the project to allow our engineering teams to self-serve where possible. Without good documentation, there is no way a team of four can support over seven hundred developers. 
We focused on providing some getting-started documentation that walked teams through the process in an end-to-end fashion. Then providing the appropriate reference documentation for each step. This covered items such as the support channels available, each team's responsibilities, the migration preparation and also information on how to use our new Advanced Artefacts service both locally and from our CI/CD pipelines. A great deal of time was spent poring over this; it was, however, crucial to the success of the project. ### Clinics A technique that has worked fairly well for our organisation is the idea of online clinics. We held clinics twice a week for the duration of the project. We used the first two clinics to kick off the project with our engineering teams. This helped us set timelines around key milestones and clear expectations on what was being delivered. After that, they were reserved for anyone to drop into, receive updates and ask for assistance directly. ### Microsoft Teams channel Microsoft Teams is our internal communication tool, therefore, we created a dedicated channel that we would use for communicating any important updates to the engineering teams. They could also ask us questions or get further clarification as required outside clinic sessions. The artefacts team committed to replying to the questions as soon as possible ensuring teams were unblocked and able to progress quickly. ## Next up Now that we have our design in place, we need to start implementing it. Next up, we will cover the creation of the Advanced Artefacts service.
paulmowat
1,205,012
Heroku Alternative: How to Deploy A ReactJS Rails API App to Render
The Fall of Heroku Heroku recently announced they are discontinuing some of their free...
0
2022-09-28T01:06:15
https://dev.to/nickmendez/heroku-alternative-how-to-deploy-a-reactjs-rails-api-app-to-render-2k17
react, rails, webdev, tutorial
## The Fall of Heroku **Heroku recently announced they are discontinuing some of their free plan options starting November 28.** Free plans widely popularized Heroku as a go-to platform as a service (PaaS) for developers to deploy their applications and APIs. Consequently, many hobbyists, programming courses, and existing Heroku projects are looking to migrate to a comparable alternative. After spending time weighing the available options, I decided to deploy my future projects to Render.com. ## Why Choose Render.com? Render.com provides free plans for deploying static sites, web services with HTTP/2 and full TLS, PostgreSQL databases, and Redis integration. The deployment process is beginner-friendly and the documentation is easy to follow. Upgrading to a starter plan costs less than a monthly Spotify account. **Set up an AWS S3 bucket for file storage, and you have yourself a great starting point for any application at no cost.** **If you're starting from ground zero, follow these instructions to set up your environment to ensure the deployment goes smoothly. If not, skip down to "Configuring Your App for Render"** ## Create A React Rails API App from scratch 1. create a Rails API in a new project folder `rails new app-name --api --minimal --database=postgresql` 2. cd into the folder and run the following command in terminal `bundle lock --add-platform x86_64-linux --add-platform ruby` 3. Build out an MVP with routes, controllers, models, migrations, and create a database. 4. create a new React application in a client folder inside the root directory `npx create-react-app client --use-npm` 5. Add a request proxy inside the client/package.json file `"proxy": "http://localhost:3000"` 6. Update the start script in client/package.json file `"scripts": { "start": "PORT=4000 react-scripts start" }` At this point, your file structure should resemble something like this, minus a few optional folders and config files, which we will soon add.
![file-structure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2qjnv1nclxy33wud825w.png) ## Configuring Your App for Render Now you're ready to configure your app to deploy it to Render. Follow the following steps to configure your application for deployment. **1) Sign up for [Render.com](https://render.com/).** **2) Open config/database.yml file and find the production section.** - Modify it to gather the database configuration from the DATABASE_URL environment variable. - This should be located at the bottom of the file. ``` production: <<: *default url: <%= ENV['DATABASE_URL'] %> ``` **3) Open config/puma.rb and make sure the following lines are uncommented.** - Feel free to clear out the file and copy/paste the code block below. ``` max_threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 } min_threads_count = ENV.fetch("RAILS_MIN_THREADS") { max_threads_count } threads min_threads_count, max_threads_count port ENV.fetch("PORT") { 3000 } environment ENV.fetch("RAILS_ENV") { "development" } pidfile ENV.fetch("PIDFILE") { "tmp/pids/server.pid" } workers ENV.fetch("WEB_CONCURRENCY") { 4 } preload_app! plugin :tmp_restart ``` **4) Open config/environments/production.rb and enable the public file server when the RENDER environment variable is present.** - This is an existing line of code in the file , which we are modifying. ``` config.public_file_server.enabled = ENV['RAILS_SERVE_STATIC_FILES'].present? || ENV['RENDER'].present? ``` **5) Create a build script to build out your app when you deploy it.** - This script will automate the build process and update your application every time it is deployed. 
- inside your app-name/bin folder, create a file called render-build.sh - copy/paste the following code into the render-build.sh file ``` #!/usr/bin/env bash # exit on error set -o errexit bundle install # clean rm -rf public # build npm install --prefix client && npm run build --prefix client # migrate bundle exec rake db:migrate # postbuild cp -a client/build/. public/ ``` - your file should look like this when you're done ![build script](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tyknodd2z8b73cds6u74.png) **6) Make sure the script executes by running the following command in terminal.** `chmod a+x bin/render-build.sh` **7) Commit and push these changes to your GitHub Repository** ## Deploying To Render **8) Inside the root directory, create a file named render.yaml. Copy/paste the following code into the file. Replace app_name with your application's name** ``` databases: - name: app_name databaseName: app_name user: app_name services: - type: web name: app_name env: ruby buildCommand: "./bin/render-build.sh" startCommand: "bundle exec puma -C config/puma.rb" envVars: - key: DATABASE_URL fromDatabase: name: app_name property: connectionString - key: RAILS_MASTER_KEY sync: false ``` The file should look like this when you're done. ![render yaml file](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wwnx1jzn1mk8p8w0hx4m.png) **9) Commit and push these changes to your GitHub Repository** **10) Navigate to the Render website and make sure you're logged in.** On the Render Dashboard, go to the [Blueprint](https://dashboard.render.com/blueprints) page and click the New Blueprint Instance button. Select your repository (after giving Render the permission to access it, if you haven’t already). **11) Your repository branch should already be set to "main", or select whichever branch you want to deploy.
I used my "main" branch.** **12) In the deploy window, set the value of the RAILS_MASTER_KEY to the contents of your config/master.key file.** Then click Approve. **13) Navigate to your Dashboard and click on the web service you just created.** ![web service](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24j5xao3r8ei6pad3t1f.png) **14) Click on the blue Manually Deploy button on the top right-hand side and select "clear build cache and deploy"** ![deploy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5sxeluamjlu4evkprzkv.png) 15) Wait for your app to successfully deploy and click on the generated URL _your_app_name.onrender.com_ at the top. And that's how you deploy a full-stack React/Rails API application to Render. If you have any questions, feel free to drop a comment below. If you're a Flatiron student, let me know in the comments if this has helped you. To see the final result, visit my GitHub repository and Render website if you'd like. - Website: https://extrackt.onrender.com/ - Repository: https://github.com/nickmendezFlatiron/extrackt --- #### Resources - [Getting Started with Ruby on Rails on Render](https://render.com/docs/deploy-rails#create-a-build-script)
nickmendez
1,205,168
Đại Dương China Imports
With many years of operation in the goods-import business, we are committed to providing customers with a service...
0
2022-09-28T06:31:32
https://dev.to/nktqdaiduong/nhap-khau-trung-quoc-dai-duong-189m
With many years of operation in the goods-import business, we are committed to providing customers with a China import service offering multiple shipping methods and flexible payment options, ensuring customers receive good service at a very reasonable price. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9xiqnhr7frcjqved10dr.png)
nktqdaiduong
1,205,194
Create Harbor Server on Ubuntu VM
Copy this script to your Ubuntu VM and update the first user inputs section for IP, hostname and FQDN. I...
0
2022-11-01T12:39:32
https://dev.to/ashishchorge/create-harbor-server-on-ubuntu-vm-2f30
kubernetes
Copy this script to your Ubuntu VM and update the first user inputs section for IP, hostname and FQDN. I tested this script on Ubuntu 22. ``` # This script will install Harbor server # # User Inputs > #================================================== > export my_hostname=<Harbor server short host name> export my_fqdn=<Harbor server FQDN> export my_ip=<IP Address of the Harbor server> #================================================== echo "Make sure your VM is configured with proper hostname, static IP address and its entry is mentioned in your DNS server" read -n 1 -r -s -p $'Press enter to continue... else Control + c to stop \n' die() { local message=$1 echo "$message" >&2 exit 1 } # precheck echo "==== Doing precheck ====" ping $my_hostname -c 2 || die 'command failed' ping $my_ip -c 2 || die 'command failed' nslookup $my_fqdn || die 'command failed' nslookup $my_fqdn | grep $my_ip || die 'command failed' echo "1. Enable ssh on the vm" || die 'command failed' apt-get update || die 'command failed' apt install openssh-server || die 'command failed' echo "2. Verify ssh service is up and running" || die 'command failed' systemctl status ssh || die 'command failed' echo "3. Update the apt package index" || die 'command failed' apt-get update || die 'command failed' echo "4. Install packages to allow apt to use a repository over HTTPS" || die 'command failed' apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common -y || die 'command failed' echo "5. Add Docker's official GPG key" || die 'command failed' curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - || die 'command failed' sudo apt-key fingerprint 0EBFCD88 || die 'command failed' echo "6.
Setup a stable repository" || die 'command failed' echo -ne '\n' | add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" || die 'command failed' echo "7. Install docker-ce" || die 'command failed' apt-get update || die 'command failed' apt-get install docker-ce docker-ce-cli containerd.io -y || die 'command failed' echo "8. Install current stable release of Docker Compose" || die 'command failed' curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose || die 'command failed' echo "9. Apply executable permissions to the binary" || die 'command failed' chmod +x /usr/local/bin/docker-compose || die 'command failed' echo "10. Verify installation" || die 'command failed' docker-compose --version || die 'command failed' echo "11. Download the Harbor installer" || die 'command failed' curl -L https://github.com/goharbor/harbor/releases/download/v2.4.3/harbor-offline-installer-v2.4.3.tgz -o /root/harbor-offline-installer-v2.4.3.tgz || die 'command failed' echo "12. Extract the Harbor installer" || die 'command failed' tar -xvzf /root/harbor-offline-installer-v2.4.3.tgz || die 'command failed' echo "13. Generate a CA certificate private key" || die 'command failed' openssl genrsa -out ca.key 4096 || die 'command failed' echo "14. Generate the CA certificate" || die 'command failed' openssl req -x509 -new -nodes -sha512 -days 3650 -subj "/C=US/ST=CA/L=Palo Alto/O=HomeLab/OU=Solution Engineering/CN=$my_fqdn" -key ca.key -out ca.crt || die 'command failed' echo "15. Generate a private key" || die 'command failed' openssl genrsa -out $my_fqdn.key 4096 || die 'command failed' echo "16. Generate a certificate signing request" || die 'command failed' openssl req -sha512 -new -subj "/C=US/ST=CA/L=Palo Alto/O=HomeLab/OU=Solution Engineering/CN=$my_fqdn" -key $my_fqdn.key -out $my_fqdn.csr || die 'command failed' echo "17. 
Generate an x509 v3 extension file" || die 'command failed' cat > v3.ext <<-EOF authorityKeyIdentifier=keyid,issuer basicConstraints=CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth subjectAltName = @alt_names [alt_names] DNS.1=$my_fqdn DNS.2=$my_hostname IP.1=$my_ip EOF echo "18. Use the v3.ext file to generate a certificate for the Harbor host" || die 'command failed' openssl x509 -req -sha512 -days 3650 -extfile v3.ext -CA ca.crt -CAkey ca.key -CAcreateserial -in $my_fqdn.csr -out $my_fqdn.crt || die 'command failed' echo "19. Provide the certificates to harbor and docker" || die 'command failed' sudo mkdir -p /data/cert || die 'command failed' sudo mkdir -p /etc/docker/certs.d/$my_fqdn/ || die 'command failed' sudo cp ~/$my_fqdn.crt /data/cert/$my_fqdn.crt || die 'command failed' sudo cp ~/$my_fqdn.crt /etc/docker/certs.d/$my_fqdn/$my_fqdn.crt || die 'command failed' sudo cp ~/ca.crt /etc/docker/certs.d/$my_fqdn/ca.crt || die 'command failed' sudo openssl x509 -inform PEM -in ~/$my_fqdn.crt -out /etc/docker/certs.d/$my_fqdn/$my_fqdn.cert || die 'command failed' sudo cp ~/$my_fqdn.key /data/cert/$my_fqdn.key || die 'command failed' sudo cp ~/$my_fqdn.key /etc/docker/certs.d/$my_fqdn/$my_fqdn.key || die 'command failed' sudo systemctl restart docker || die 'command failed' echo "20. Copy and update certificate on Harbor VM" || die 'command failed' cp $my_fqdn.crt /usr/local/share/ca-certificates/$my_fqdn.crt || die 'command failed' update-ca-certificates || die 'command failed' echo "21.
Configure the Harbor YML file manually" || die 'command failed' cp /root/harbor/harbor.yml.tmpl /root/harbor/harbor.yml || die 'command failed' ##### update the yml file manually #echo "Update the yml file manually /root/harbor/harbor.yml and execute below command" || die 'command failed' #echo "/root/harbor/install.sh --with-notary --with-chartmuseum || die 'command failed'" cp /root/harbor/harbor.yml.tmpl /root/harbor/harbor.yml || die 'command failed' cat /root/harbor/harbor.yml | sed -e "s/hostname: reg.mydomain.com/hostname: $my_fqdn/" > /tmp/1 || die 'command failed' cat /tmp/1 | sed -e "s/certificate: \/your\/certificate\/path/certificate: \/root\/$my_fqdn.crt/" > /tmp/2 || die 'command failed' cat /tmp/2 | sed -e "s/private_key: \/your\/private\/key\/path/private_key : \/root\/$my_fqdn.key/" > /tmp/3 || die 'command failed' cp /tmp/3 /root/harbor/harbor.yml || die 'command failed' echo "22. Install with Notary, Clair and Chart Repository Service" || die 'command failed' /root/harbor/install.sh --with-notary --with-chartmuseum || die 'command failed' ```
ashishchorge
1,205,375
START: Developing an A/B testing platform for a streaming service
START is an online cinema partly owned by Megafon. In 2021, START turned to Evrone with the task of...
0
2022-09-28T11:46:24
https://evrone.com/start
testing, webdev, react, writing
START is an online cinema partly owned by Megafon. In 2021, START approached Evrone with the task of creating a standalone service for A/B testing. Its function is to accept a request with a user ID, distribute that user among the current experiments, and return this assignment. START needed a user-friendly interface where analysts could set parameters themselves, create groups, and start and end experiments at a given time. It had to be connected to a service that collects the necessary user data, on the basis of which test groups could be assembled. [Read the full case study here.](https://evrone.com/start)
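The core function described above, accepting a user ID and returning a stable experiment group, is commonly implemented with deterministic hashing, so no per-user state needs to be stored. Here is a minimal sketch of that general technique; the function, experiment name, and variant labels are illustrative, not START's or Evrone's actual implementation:

```python
# Deterministic A/B assignment: hashing user ID + experiment name means a
# user always lands in the same group, independently for each experiment.
# All names here are illustrative; this is not the actual START service.
import hashlib

def assign_variant(user_id, experiment, variants):
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)  # stable bucket per (user, experiment)
    return variants[bucket]

# The same user always receives the same assignment for a given experiment:
v1 = assign_variant("user-42", "new-player-ui", ["control", "treatment"])
v2 = assign_variant("user-42", "new-player-ui", ["control", "treatment"])
print(v1 == v2)  # True
```

Hashing the experiment name together with the user ID keeps assignments independent across experiments, so one test does not bias another.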
evrone
1,205,390
How to Synchronize Stocks in Several eCommerce Channels
eCommerce is the process of selling products online and maintaining inventory control, tracking and...
0
2022-09-28T12:11:28
https://dev.to/vkonoplia/how-to-synchronize-stocks-in-several-ecommerce-channels-1c80
tutorial, api, programming, webdev
eCommerce is the process of selling products online while maintaining inventory control, tracking, and management. For eCommerce store owners, whether large or small, it is impossible to manage all of this information manually. If a product is out of stock, it needs to be restocked in time so that customers do not switch to another site.

Inventory synchronization between different eCommerce stores helps maintain an optimal amount of resources in all channels. For example, if an item is out of stock or a customer wants to return it, it would be difficult for an eCommerce manager to know about these situations without automation tools. This is where inventory synchronization comes into play. Stock synchronization gives businesses access to information about their inventory at remote locations via software applications. In this article, you will find out how various B2B SaaS systems can synchronize stock across multiple eCommerce channels.

## Process of Inventory Synchronization in Different Software

Inventory management software is able to track sold products across all online stores. Any retail seller will benefit from an inventory management system: they no longer have to worry whether goods sold out in one online shop still show as available in others. This information is part of the stock synchronization process itself.

ERP solutions also allow sellers to synchronize stock levels automatically and keep stock figures accurate and up to date across multiple channels. When a customer buys a product, such a system immediately recalculates inventory. With automatic inventory updates, online store owners can prevent customer disappointment and build trust and confidence among their customers.

One of the most prominent features of a WMS is the ability to track and control inventories through multiple channels. Any actual stock updates are automatically displayed on all sales channels.

Another system that allows store owners to sync stock is multi-channel software. It enables e-sellers to adjust stocks and update them automatically, preventing situations where goods run out of stock or are oversold.

Online store owners create their stores on different eCommerce platforms and marketplaces like Amazon or eBay. In addition, they cooperate with different suppliers. In this case, dropshipping automation software is useful, as it solves many problems related to dropshipping, [including stock synchronization](https://api2cart.com/business/how-dropshipping-software-providers-can-simplify-inventory-synchronization/?utm_source=dev.to&utm_medium=referral&utm_campaign=stocksyncv.k). This type of software automatically keeps inventory levels in sync between vendors and stores or eCommerce platforms. Automation also eliminates the possibility of human error.

## Inventory Synchronization Workflow

B2B eCommerce solutions typically synchronize stock every few minutes, or sometimes in real time. The [inventory synchronization process works](https://api2cart.com/api-technology/woocommerce-sync-products/?utm_source=dev.to&utm_medium=referral&utm_campaign=stocksyncv.k) as follows:

- Merchants manually enter the number of products and distribute this info across various platforms, such as Wix or eBay.
- Orders are placed on one of the channels on which online store owners sell.
- Orders are imported into the B2B eCommerce solution, which automatically adjusts the inventory.
- Once the system counts the stock, it automatically updates the info on the different sales channels.

Retailers can track sales data and replenish inventories by using B2B SaaS solutions. However, to synchronize stock across multiple sales channels, these systems need to be integrated with eCommerce platforms. Integrating with each eCommerce platform manually is difficult, and it becomes a massive task when several platforms are involved. Any integration is a complex process that requires an expert to perform it well; it is also a labor-intensive and expensive process.

## eCommerce Integration Challenges

Integrating with multiple eCommerce platforms can be challenging for B2B SaaS companies. The following are some of the difficulties they may face:

- The complexity of the process. Each eCommerce platform has its own architecture and logic, and studying them requires time and skill.
- The need for qualified specialists. In B2B eCommerce applications, many users rely on the integration, so a poorly developed API can cause significant problems and customer losses.
- Integration takes time. Developing one integration takes at least a month, and each integration, as noted above, is unique, so you can multiply that time by the number of platforms you want to integrate with.
- Costly integration. Each integration costs at least several thousand dollars.
- Finishing the integration is not the end of the story. The integration needs ongoing updates and maintenance, which again consumes time and money because personnel are needed to support it.

However, finding a pre-integrated solution to connect to different shopping platforms will certainly help.

## How Easy Is It to Synchronize Stock Across Multiple Sales Channels?

With a firm intention to expand their business and save money, system providers start looking online for the right solution. A unified API integration solution makes connecting to a shopping platform simple and easy. To start using such a service, you should:

1. Sign up for a new free account.
2. Add your store.
3. When a sale is made in one of your stores, receive a notification of the order via the order.add event or by using the order.list method.
4. Update the inventory quantity on all sales channels with the product.update API method, using the increase_quantity and decrease_quantity options.

Instead of integrating with each eCommerce platform separately, the best solution is to use a single API. For example, such solutions provide integration with [more than 40 shopping platforms](https://api2cart.com/supported-platforms/?utm_source=dev.to&utm_medium=referral&utm_campaign=stocksyncv.k) simultaneously. These services facilitate integration with popular and widely used carts such as WooCommerce, Shopify, BigCommerce, X-Cart, OpenCart, and other top platforms. In addition, they offer all the necessary API methods to provide a stock sync function for online store owners. You can also receive, add, update, and delete information about orders, products, customers, shipments, categories, and more.
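The numbered workflow above can be sketched as a small in-memory simulation; the channel names, data shapes, and the `sync_order` helper are illustrative stand-ins for the real order.list and product.update API calls:

```python
# Toy simulation of multi-channel stock synchronization.
# In production, the order would arrive via the order.add event / order.list
# method, and each channel update would be a product.update API call.

def sync_order(stock, order):
    """Deduct an order from the central stock count, then mirror
    the new quantity to every connected sales channel."""
    sku, qty = order["sku"], order["quantity"]
    stock["central"][sku] -= qty
    for channel_stock in stock["channels"].values():
        channel_stock[sku] = stock["central"][sku]

stock = {
    "central": {"SKU-1": 10},
    "channels": {"shopify": {"SKU-1": 10}, "ebay": {"SKU-1": 10}},
}

# A sale of 3 units on one channel is reflected everywhere:
sync_order(stock, {"sku": "SKU-1", "quantity": 3, "channel": "ebay"})
print(stock["channels"]["shopify"]["SKU-1"])  # 7
```

Keeping a single central count and pushing it outward is what prevents the overselling scenario described earlier, where one channel still advertises stock another channel has already sold.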
vkonoplia
1,205,629
Low-Code vs No-Code: What’s the Difference?
While they are similar, low-code and no-code development have key differences While both low-code...
19,559
2022-10-04T18:58:16
https://duplocloud.com/blog/low-code-vs-no-code/
lowcode, nocode, cloud, coding
---
title: "Low-Code vs No-Code: What’s the Difference?"
published: true
tags: lowcode,nocode,cloud,coding
canonical_url: https://duplocloud.com/blog/low-code-vs-no-code/
cover_image: https://duplocloud.com/wp-content/uploads/2022/09/low-code-vs-no-code.jpg
series: Get to know low-code and no-code
---

_While they are similar, low-code and no-code development have key differences_

While both low-code and no-code are meant to shorten development pipelines and break down organizational silos, business managers need to be aware of the differences between low-code vs no-code. The two expedited forms of development have nuances, with implications for how they’ll impact a company and its processes. Here’s an overview of the differences between the two methods, how they differ from traditional development, and examples of no-code/low-code platforms currently on the market.

**Jump to a section…**

* [The Differences of Low-Code vs No-Code Software Development](#differences)
* [What Is No-Code Software Development?](#what-is-no-code)
* [What Is Low-Code Software Development?](#what-is-low-code)
* [How Traditional Development Differs From Low-Code and No-Code](#traditional-development)
* [No-Code vs Low-Code in Real Life](#no-code-vs-low-code)
* [Choosing a Low-Code Application](#choosing-low-code)
* [DuploCloud](#duplocloud)
* [Choosing a No-Code Application](#choosing-no-code)
* [Caspio](#caspio)
* [Claris Filemaker](#claris-filemaker)
* [Low-Code vs No-Code for Complex Software](#complex-software)

## The Differences of Low-Code vs No-Code Software Development

Gartner [predicts that 80% of technology products and services](https://www.gartner.com/en/newsroom/press-releases/2021-06-10-gartner-says-the-majority-of-technology-products-and-services-will-be-built-by-professionals-outside-of-it-by-2024) will be built by people other than development professionals by 2024. Ultimately, this is a good thing; more people are transitioning from traditional development to low-code and no-code, which means more people are able to innovate in the tech industry. But what is low-code/no-code development, and how do the two terms differ?

### What Is No-Code Software Development?

No-code platforms allow users to create applications and/or their underlying technical underpinnings through simple graphical user interfaces, all without writing code. Creators with no pre-existing knowledge of software development can build applications that fit their needs, and those in supporting roles can build and automate the systems necessary to run applications, including those native to public cloud infrastructures like AWS or GCP.

No-code platforms often feature templates and simple menus for application customization. While some no-code platforms provide the framework for designing front-end applications, such as website builder Squarespace, others focus on providing citizen developers with the tools they need to automate simple, menial internal tasks, allowing dedicated development teams to direct their energy toward more specialized, time-consuming work.

_For a more detailed explanation, check out the full article, “[What are No-Code Platforms?](https://dev.to/duplocloud/what-are-no-code-platforms-12l8)”_

### What Is Low-Code Software Development?

Low-code software development allows for lean code that is either prefabricated or auto-generated, typically created in a simple graphical interface that can interpret input and fill in the blanks. The finished product can be of similar or better quality than something handcrafted, at a fraction of the time and cost. [According to Gartner](https://www.gartner.com/reviews/market/enterprise-low-code-application-platform), an Enterprise Low-Code Application Platform features:

* UI capabilities through responsive web and mobile applications.
* Orchestration or choreography of pages, business processes, and decisions or business rules.
* A built-in database.
* “One button” deployment of applications.

From-scratch development is becoming more complicated, increasing the chances of security breaches caused by human error, along with other issues. Low-code development reduces the likelihood of those issues reaching the final product, and can help ensure compliance with standards such as GDPR, PCI-DSS, HIPAA, and more.

_For more information, check out the article, “[What are Low-Code Platforms?](https://dev.to/duplocloud/what-are-low-code-platforms-1cnm)”_

## How Traditional Development Differs From Low-Code and No-Code

Traditional development is costly, complicated, and time-consuming, requiring large teams of developers and months — if not years — to complete. Traditional development also requires significantly more maintenance than no-code and low-code: the latter two typically have automated maintenance procedures baked into the software service, while the former has to be maintained manually.

No-code and low-code development allow for lean teams, reducing annual costs and labor. In October 2019, there were more than [900,000 unfilled IT positions](https://www.wsj.com/articles/americas-got-talent-just-not-enough-in-it-11571168626/) in the US alone, and the situation hasn’t improved. [Demand for IT professionals is expected to grow 13% by 2030](https://www.bls.gov/ooh/computer-and-information-technology/home.htm), while [demand for software developers is expected to grow by 22%](https://www.bls.gov/ooh/computer-and-information-technology/software-developers.htm), according to the Bureau of Labor Statistics. The industry is currently ill-equipped to meet these needs, hence the need for no-code and low-code development platforms, which allow less-specialized employees to assist dedicated development teams with building and maintaining complex applications.

Because many rudimentary tasks can be standardized and automated, no-code and low-code platforms give organizations opportunities to increase deployment speed, rapidly test and iterate, and reduce the chance of introducing human error. If you’re a leader looking to minimize time to market and accelerate your organization’s ability to release competitive software at a competitive rate, click below to check out DuploCloud's free whitepaper on how to deploy cloud applications 10x faster with a no/low-code DevOps platform.

## No-Code vs Low-Code in Real Life

No-code and low-code development isn’t new; the term “low-code” originated in a 2011 [Forrester report](https://sdtimes.com/application-development/low-code-development-seeks-accelerate-software-delivery/) on emerging productivity platforms. These services are now accessible to both companies and consumers, making it far easier to launch and maintain applications.

### Choosing a Low-Code Application

![DuploCloud low-code application](https://lh6.googleusercontent.com/AsafDCXbZRNApxzgrgbVcjqAQiqScqm6Pm5oGETF7fyT-gOIABBia65FyXPX9SUd5wptcurPpNUwqh8DgmkdQf0LxbPaqUB7fCWIpYEMPX4v8NTQKej_Y3NoYTb7nowEBjtsVueCZeVSOpJ8QYdi2wfl-ftd1azkJOFP-C_y3Y_b8peXJdUEC1EkHA)

#### DuploCloud

**By DuploCloud, Inc.**
**Website:** [duplocloud.com](http://duplocloud.com/)

Designing cloud-native applications requires months of manual configuration to ensure security and regulatory compliance. DuploCloud offers low-code cloud Infrastructure-as-Code automation with built-in security and compliance via its Terraform provider, reducing development times by a factor of ten. Organizations still require development teams with dedicated coding knowledge to build their applications. However, the DuploCloud platform greatly reduces the time needed to code cloud infrastructure, and automates diagnostics, reporting, and CI/CD pipeline implementation. These tools allow development teams to build, migrate, and deploy their on-premises applications to the cloud with ease.

### Choosing a No-Code Application

![Caspio no-code application](https://lh3.googleusercontent.com/z1HjJxNU3b231u5Cg3kNYP639SB8ruz6YcXQS7QviMM7vd5Q8V7GGomPw3QRoCpr_W4-4PWEgiWol_OLIflTxhnIrsQRA1rTJIDTx8CznstKWq4OgSHtWRRDhGtOXFYd4_32KvTz4jB6Pwthw1VcKoiBZstN0sGRnFRTlDISZlcn3u9QDkrINQvYbA)

#### Caspio

**By Caspio, Inc.**
**Website:** [caspio.com](https://www.caspio.com/)

Caspio specializes in providing a no-code platform for creating cloud-based, fully searchable database applications. Citizen developers can use the visual point-and-click development and automation tools to design flexible applications, while dedicated development teams can integrate them — along with any necessary data — into any web property.

![Claris Filemaker no-code application](https://lh5.googleusercontent.com/rbxxFeVuCqEm9kURzyiXz14Kg-w06TIxDAY_gA_LX3-gdIQNVFsYGL-070p3l86iL0zxl9ijfcSHzSnHg6yg9VrB9ykkizTzMx58pQhXOgu_EVH0hR0ucW_0ez4rwLyHFLpVnq4DJqXqVkW7VSqMe7UaMR7TPY5CcITzTMXAjzLetQ1n8gg_uJGxGw)

#### Claris Filemaker

**by Claris International Inc.**
**Website:** [claris.com/filemaker](https://www.claris.com/filemaker/)

Claris Filemaker was originally developed in the 1980s, and has since evolved into a fully-featured no-code business application development platform. Users can drag and drop pre-built elements like calendars and photo galleries to customize programs to their needs, or rely on templates to quickly design business-ready CRMs, content libraries, and more.

## Low-Code vs No-Code for Complex Software

Typically, no-code development is intended for simple, front-end applications, though several organizations are making strides in providing more robust tools within a no-code framework. That said, more complex operations should look to low-code software development solutions for their increased flexibility and customizability. Low-code development gives developers finer control over their end product, while simultaneously reducing operational strain.

Traditionally, developing complex software is costly and prone to human error. Low-code software can reduce the financial cost of deploying cloud applications and speed up the development pipeline by 10x while minimizing mistakes. For organizations to be successful with low-code development, they need to find a manageable, intuitive partner platform. DuploCloud meets those needs, helping organizations create cloud-compliant applications at a fraction of traditional development’s cost and time investment. DuploCloud eliminates human error from the equation with its low-code solution, enabling a faster go-to-market time without sacrificing quality or security.
zgover
1,206,121
Video SDK for easier integration
A wide variety of Video SDK More and more companies are joining the video service...
0
2022-09-29T02:50:34
https://www.zegocloud.com/blog/video-sdk
ios, swift, programming, video
## A wide variety of Video SDKs

More and more companies are joining the video service industry, and various **video SDKs are emerging**. This has made it increasingly difficult for developers to choose one. How should we choose the **Video SDK** that suits us?

## Simple becomes mainstream

Developers use an SDK to obtain stable, high-quality services while saving development time. So, besides stability and quality, ease of integration has become the goal that vendors pursue. It is a real time saver for developers who build audio and video applications. For example, the latest UIKits product launched by [ZEGOCLOUD](https://www.zegocloud.com?_source=dev&article=32) has brought the convenience of integrating a video SDK to a new height: developers can implement an audio and video call feature in only 30 minutes. The UIKits SDK provides 20+ UIKits and 50+ components, supports video calls, voice calls, live streaming, and other scenarios, and offers rich configuration items for convenient, quick UI customization.

## Integrate the video SDK

Let's take the **iOS Video SDK** as an example to experience, step by step, the convenience brought by the UIKits SDK.

### 1) Add the SDK to your project

Add the ZegoUIKitPrebuiltCall SDK via CocoaPods: add `pod 'ZegoUIKitPrebuiltCall'` to your Podfile, then run `pod install` in Terminal. For details, please refer to the [Quick Access Documentation](https://docs.zegocloud.com/article/14819?_source=dev&article=32).

```ruby
target 'ZegoCallDemo' do
  use_frameworks!

  # Pods for ZegoCallDemo
  pod 'ZegoUIKitPrebuiltCall'
end
```

### 2) Import the iOS video SDK into your project

In the file where you call the SDK interfaces, import the SDK:

```swift
import ZegoUIKitSDK
import ZegoUIKitPrebuiltCall

// YourViewController.swift
class ViewController: UIViewController {
    // Other code...
}
```

### 3) Show the ZegoUIKitPrebuiltCallVC in your project

Next, you only need to display `ZegoUIKitPrebuiltCallVC` in the module that starts the video call to complete the SDK integration. Before entering the page, three steps need to be done:

- Go to the [ZEGOCLOUD Admin Console](https://console.zegocloud.com?_source=dev&article=32) and get the appID and appSign of your project.
- Specify the userID and userName for connecting to the Call Kit service.
- Create a callID that represents the call you want to make.

```swift
// YourViewController.swift
class ViewController: UIViewController {
    // Other code...
    var userID: String = <#UserID#>
    var userName: String = <#UserName#>
    var callID: String = <#CallID#>

    @IBAction func makeNewCall(_ sender: Any) {
        let config: ZegoUIkitPrebuiltCallConfig = ZegoUIkitPrebuiltCallConfig()
        let audioVideoConfig: ZegoPrebuiltAudioVideoViewConfig = ZegoPrebuiltAudioVideoViewConfig()
        let menuBarConfig: ZegoBottomMenuBarConfig = ZegoBottomMenuBarConfig()
        config.audioVideoViewConfig = audioVideoConfig
        config.bottomMenuBarConfig = menuBarConfig

        let layout: ZegoLayout = ZegoLayout()
        layout.mode = .pictureInPicture
        let pipConfig: ZegoLayoutPictureInPictureConfig = ZegoLayoutPictureInPictureConfig()
        pipConfig.smallViewPostion = .topRight
        layout.config = pipConfig
        config.layout = layout

        let callVC = ZegoUIKitPrebuiltCallVC.init(yourAppID, appSign: yourAppSign, userID: self.userID, userName: self.userName, callID: self.callID, config: config)
        callVC.modalPresentationStyle = .fullScreen
        self.present(callVC, animated: true, completion: nil)
    }
}
```

### 4) Configure your project

Finally, add the camera and microphone permissions to the iOS configuration file Info.plist, and then you can start to experience the audio and video call function.
```xml
<key>NSCameraUsageDescription</key>
<string>We require camera access to connect to a call</string>
<key>NSMicrophoneUsageDescription</key>
<string>We require microphone access to connect to a call</string>
```

![video-sdk](https://resource.zegocloud.com/content_resource/2022/09/29/videosdk.jpg)

## Custom prebuilt UI

ZEGOCLOUD Call Kit provides a wealth of customization options that you can adjust to your needs. For example, on the call page we can achieve the following effects:

### 1) Display my view when my camera is off

To keep showing your own video view when the camera is turned off, set the `showMyViewWithVideoOnly` parameter in `ZegoUIkitPrebuiltCallConfig` to true.

![layout_show_self](https://resource.zegocloud.com/content_resource/2022/09/29/layoutshowself.gif)

### 2) Hide my view when my camera is off

To hide your own video view when the camera is turned off, set the `showMyViewWithVideoOnly` parameter in `ZegoUIkitPrebuiltCallConfig` to false.

![layout_hidden_self](https://resource.zegocloud.com/content_resource/2022/09/29/layouthiddenself.gif)

### 3) Dragging the small view

To make the small video view draggable, set the `isSmallViewDraggable` parameter in `ZegoUIkitPrebuiltCallConfig`.

![layout_draggable](https://resource.zegocloud.com/content_resource/2022/09/29/layoutdraggable.gif)

### 4) Switch the content of two video views

To switch the content of the two video views by clicking, set the `switchLargeOrSmallViewByClick` parameter in `ZegoUIkitPrebuiltCallConfig`.

![layout_switch](https://resource.zegocloud.com/content_resource/2022/09/29/layoutswitch.gif)

Here is the reference code:

```swift
class ViewController: UIViewController {
    let selfUserID: String = "userID"
    var selfUserName: String = "userName"
    let yourAppID: UInt32 = YourAppID     // Fill in the appID that you get from ZEGOCLOUD Admin Console.
    let yourAppSign: String = YourAppSign // Fill in the appSign that you get from ZEGOCLOUD Admin Console.

    @IBOutlet weak var userIDLabel: UILabel! {
        didSet {
            userIDLabel.text = selfUserID
        }
    }
    @IBOutlet weak var userNameLabel: UILabel! {
        didSet {
            selfUserName = String(format: "zego_%@", selfUserID)
            userNameLabel.text = selfUserName
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
    }

    @IBAction func makeNewCall(_ sender: Any) {
        let config: ZegoUIkitPrebuiltCallConfig = ZegoUIkitPrebuiltCallConfig()
        let layout: ZegoLayout = ZegoLayout()
        layout.mode = .pictureInPicture
        let pipConfig: ZegoLayoutPictureInPictureConfig = ZegoLayoutPictureInPictureConfig()
        pipConfig.showMyViewWithVideoOnly = false
        pipConfig.isSmallViewDraggable = true
        pipConfig.switchLargeOrSmallViewByClick = true
        layout.config = pipConfig
        config.layout = layout

        let callVC = ZegoUIKitPrebuiltCallVC.init(yourAppID, appSign: yourAppSign, userID: selfUserID, userName: selfUserName, callID: "100", config: config)
        callVC.modalPresentationStyle = .fullScreen
        self.present(callVC, animated: true, completion: nil)
    }
}
```

---

[Sign up](https://console.zegocloud.com/account/signup?_source=dev&article=32) with ZEGOCLOUD, and get **10,000 minutes** free every month.

## Did you know? 👏

> **Like** and **Follow** is the biggest encouragement to me
> **Follow me** to learn more technical knowledge
> Thank you for reading :)

## Learn more

This is one of the live technical articles. Welcome to check out the other articles:

{% embed https://dev.to/zegocloud/how-to-choose-android-voice-chat-sdk-1ojf %}
{% embed https://dev.to/zegocloud/how-to-create-avatar-from-photo-57oa %}
{% embed https://dev.to/zegocloud/how-to-build-a-live-streaming-app-81a %}
{% embed https://dev.to/davidrelo/how-to-use-video-call-api-to-build-a-live-video-call-app-4k1b %}
davidrelo
1,206,203
My first pull request!
Hello everyone! Today, I worked on developing an extra feature on my friend's tool...
20,323
2022-09-29T03:48:11
https://dev.to/gulyapulya/my-first-pull-request-1dec
opensource, beginners, git, github
## Hello everyone!

Today, I worked on developing an extra feature for my friend's [tool Rohan-SSG](https://github.com/rokaicker/StaticSiteGenerator). For that, I had to go through multiple steps to ensure that all of my changes were made safely, following best practices. First of all, I filed an issue on his repo stating my enhancement intentions, explaining the goal and my solution idea. After that, I forked his repo to make the changes in a copy of his work under my profile instead of the original. I also created a new branch with a self-explanatory name, such as the issue number. I made all of my changes on that branch locally, pushed them back to GitHub, and opened a pull request.

My [issue](https://github.com/rokaicker/StaticSiteGenerator/issues/23) was about adding support for markdown syntax features such as italics and bold. The heading 1, heading 2, and link features were already developed, so I had to integrate my new implementation into the existing code. I looked through the file system, found where the logic for my part was located, understood the general coding and commenting style, and followed it. I also updated the files used for testing to contain italicized and bold markdown text, so that my new feature could be tested. I made sure everything worked before committing. Lastly, I checked the documentation to see if I had to update any information, but there was nothing about currently supported syntax features.

My friend reviewed the changes and everything looked good. However, he requested that I add a line in the documentation stating all of the supported markdown styles. I did that, and the [pull request](https://github.com/rokaicker/StaticSiteGenerator/pull/24) was accepted and merged into the main branch. Generally, the experience was great! It was interesting to check out someone else's code and collaborate. Moreover, I was extremely excited to see my profile picture and name under the contributors tab. So proud.

_______________________________________________

Besides that, today was also the day when someone contributed to my repo for the first time! My friend added a markdown capability for italicized text to my repo and [opened a pull request](https://github.com/gulyapulya/SSGulnur/pull/7). I thoroughly went through all of the changed lines and reviewed the code. It was lovely to see that the other person had followed all of my styling and coding conventions. So respectful. I forgot to request a change to the documentation, so I did it myself, as it was not a big deal. So far, I am getting much more comfortable with Git and GitHub. Cool!
gulyapulya
1,206,231
React js project shows vulnerabilities when executing "npm install"
I got a project from my client in which the backend is lumen and the front end is reactjs and I want...
0
2022-09-29T05:20:15
https://dev.to/mayareactdev/react-js-project-shows-vulnerabilities-when-executing-npm-install-4bio
I got a project from my client in which the backend is Lumen and the frontend is React, and I want to make enhancements to it. Lumen installed successfully, but I failed to install the React project. When I executed "npm install" inside the folder, it showed 33 vulnerabilities, and when I ran npm audit I found that the issue was with the module node-sass 5.0.0. At first I ran the project on Node 16.15.0, then realized that node-sass 5.0.0 is compatible with Node 15, so I switched to Node 15.0.0. It then showed other vulnerabilities for modules like jest and more. Every time I resolve a problem reported by npm audit, a new one arises. How can I solve this issue?
mayareactdev
1,206,461
‘Automation’ is Key in Accelerating Business Velocity through Digital Transformation: Arunava Paul [Testμ 2022]
You’ve probably seen and heard people gushing about the digital transition by this point. It is the...
0
2022-09-29T11:05:49
https://www.lambdatest.com/blog/accelerating-business-velocity-through-digital-transformation/
automation, testing, webdev, velocity
You’ve probably seen and heard people gushing about the digital transition by this point. It is the most popular buzzword that ranks within the top ten. Another competitor for first place is automation. But why is there a difference if both focus on increasing business operations’ efficiency? In this session of the Testμ Conference, [Arunava Paul](https://www.linkedin.com/in/arunavapaul/), Head of Innovation and COE, Qualizeal, teamed up with [Somesh Ojha](https://www.linkedin.com/in/someshojha/), Director of Sales, LambdaTest, to emphasize the role of automation in helping fast-paced businesses achieve digital transformation. {% youtube AjhQv2hOjAI %} In the field of Software Quality Engineering and Assurance, Arunava has 18 years of experience. He has a wealth of expertise managing test automation portfolios for major banks, working on prestigious BFS engagements across international borders, and managing Testing Services and Automation Centers of Excellence. His experience with automation spans many areas, including service virtualization, testing Microservices APIs, classic GUI automation, and mobile test automation. Additionally, he assists with consultation, gap analyses, due diligence, framework implementation, proposal presentations, automation evaluations, and transformation projects. He started the talk by highlighting the agenda of the meeting: **Agenda:** * Abstract. * What is digital transformation? * Automation is key in the world of digital transformation. * Various automation in details * Cost of Quality > “Quality is everyone’s responsibility” — William Edwards Deming. > Accelerating business velocity has become the new theme in the world of digital transformation and has resulted in a higher degree of automation awareness across the IT life cycle. The industry is striving toward accelerating the release life cycle through various automation initiatives and workforce transformation. 
The research says enterprises want to achieve 60 percent automation to address consumer needs. In this session, Arunava Paul discussed how we look into new-age automation, tools, and key best practices. ## Digital transformation Digital transformation is a technology-driven process of radically changing a business to bring value to its customers and pave its way toward growth. Digital transformation is different for every organization, and it can be hard to pinpoint a definition that applies to all. In general, digital technology integration into all areas of business results in fundamental changes to how businesses operate and deliver value to customers. It also needs a culture change that requires the organization to understand and adopt a path of challenging the status quo, innovating more, and getting faster feedback for better improvement. Arunava dives in further and explains the elements. Although Digital Transformation is different for every organization — Common Elements of Digital Transformation are: * **Customer Experience** — Bringing value to customers, understanding customer needs, and using AI to serve customers better. * **Operating Agility** — Increasing Operational agility and reducing operations costs through process automation. * **Culture and Leadership** — Creating a culture and environment that embraces the new digital changes. * **Workforce Enablement** — Training the workforce to prepare for transformation and empower them with tools and knowledge. ## Real-World Examples of Digital Transformation **Banking Sector:** In the banking sector, rising demand for Artificial Intelligence (AI), Blockchain, and the Internet of Things (IoT) is modernizing the industry. * Omnichannel banking. * Analytics-Driven Marketing. * Customer Service. * Credit decisioning. * Regulatory Compliance. **Insurance Sector:** * Claims Processing. * Claims fraud detection. * Analytics-Driven underwriting/Risk Scoring. * Reports Automation. 
**Telecom Sector:** * 5G Network — The Network of Today and Tomorrow * Internet of Things and Telematics — enable better communication between people and devices **Healthcare Sector:** * Virtual visits via telemedicine. * Omnichannel Patient Portals. * Blockchain and Medical records. * Voice systems and chatbots to screen patients and reduce the load for patient support staff. **Automotive Sector:** * Predictive maintenance using IoT. * Virtual showrooms and Video Screens for purchasing vehicles. * Autonomous driving / self-driving. * Alternative rules for environment and cost. ## Automation is key in the world of digital transformation He claims, *“Automation is a key to accelerate business velocity and to make Digital Transformation successful.”* He says the snapshot below is the complete view of today’s talk. ![](https://cdn-images-1.medium.com/max/3998/0*El9_DLaG4-0a_U8P.png) ## Customer Experience — Automation Areas 52% of all Internet traffic now comes from mobile, with desktop usage steadily declining. It’s projected that by 2025, 92% of internet use will be mobile-based. Mobile apps are no longer the answer: 61% of consumers will not download an app to communicate with a business. Responsive websites powered with AI, data analytics, and chatbots are gaining pace. ![](https://cdn-images-1.medium.com/max/3998/0*b-g0d9QzFYoeXgob.png) * Apply AI to uncover processes automatically. Process discovery automation can increase the pace by 5x. * Chatbots can help CX by providing information quickly and reducing long wait times for customer care. * Businesses can gain insights with embedded real-time data analytics. ## Culture and Leadership — Automation Areas What role does culture play in digital transformation? Organizations need to follow a mindset transformation and support continuous innovation and R&D. 
Embracing this shift requires everyone in the company to rethink and adopt a culture of “Let’s build and create new capabilities that didn’t exist before.” ![](https://cdn-images-1.medium.com/max/3998/0*OZUJYqzbZL7wTyMI.png) * Facilitate teamwork by breaking down silos. * Adopt a model of Continuous Innovation. * Fail fast, learn from failures, and improve. * Facilitate workforce transformation. * Help the marketing team transform themselves and be able to use new digital tools powered with AI & data visualizations. ## Operational Agility — Automation Areas To bring operational agility, process automation is the key to Digital Transformation. Automating day-to-day workflows lies at the heart of successful digital transformation. Without workflow automation, organizations are still dealing with manual processes, and the threat of being disrupted and becoming irrelevant is real. ![](https://cdn-images-1.medium.com/max/3998/0*2RoTShZQ5tyzuqi3.png) * **Onboarding Automation**: Whether onboarding a single employee or a new customer, automating the onboarding process can profoundly impact ease and speed. * **HR Workflow Automation**: Streamlining HR procedures with workflow automation tools and document management software makes HR’s job easier and saves time. * **Finance Process Automation**: Automating finance processes improves cash flow and frees up the finance team’s time to allow them to work on critical analytical aspects. ## Workforce Transformation Workforce transformation is an important aspect of Digital Transformation and can help organizations accelerate transformation with rightly skilled teams. Workforce transformation refers to realigning a company’s employee base to ensure that its skills match its strategic needs. ![](https://cdn-images-1.medium.com/max/3998/0*ARZskuW_CkQSKNv6.png) * Increase team efficiency and accuracy through automation enablement so employees can focus on the tasks that matter. 
* More rewarding work leads to more satisfied and engaged employees. * Enable all teams to be able to leverage digital tools and technologies. * Full Stack tester: Strong business knowledge, sound technical skills, and the right tool knowledge are prerequisites for an excellent full-stack tester. * A full-stack tester is not someone who knows everything, but someone who understands the mechanisms and can find the best test strategies and test approach to ensure quality is delivered. ## Technology Transformation — New age automation He further explains the new age of automation. You can have automation without digital transformation, but not the other way around. A principal cloud evangelist, Leon Godwin, says, “Digital transformation improves processes, while automation adds speed and reduces costs.” Ultimately, digital transformation delivers more business value when combined with process efficiency, enabled by automation. ![](https://cdn-images-1.medium.com/max/3998/0*nbObpmSrn_1oDL9r.png) * Traditional UI/API/Mobile automation — helps to validate technology changes faster and reduce release cycle time. * Shift left with API automation, BDD automation, mobile automation, code-less automation, and hyperautomation. * Automated deployment/On-demand environment provisioning — supports more frequent and faster deployments, helping to reduce time. ## Cost of Quality Cost of Quality is a method for calculating the costs companies incur ensuring that products meet quality standards, plus the cost of producing goods that fail to meet quality standards. ![](https://cdn-images-1.medium.com/max/3998/0*PkATt0Ms1_Vp7n7C.png) *Cost of Quality = Prevention cost + Appraisal cost + Internal failure + External Failure costs* * Testing early and continuously can help us reduce the Cost of Quality. * Shift-Left with API automation helps teams start automating early and get early feedback, reducing the Cost of Quality. * Testing should be everyone’s responsibility. 
* BDD methodology suggests scenarios should be created following the Three Amigos process involving Business Analyst, Developer, & QA. * The cost of defects is measured by their impact and by when we find them: the earlier a defect is found, the lower the cost of fixing it. For example, if a defect or issue is found in the requirement specifications during the requirements gathering and analysis phase, it is cheaper to fix than if found in the later stages of the SDLC. ![](https://cdn-images-1.medium.com/max/3998/0*tGgv2IBCArs1weCt.png) ## Statistics *Where can digital transformation make an impact?* He explains the statistics below: **Market Growth Rate** Direct digital transformation investment is expected to grow at a compound annual growth rate (CAGR) of 18% from 2020 to 2023. By 2023, it is expected to approach $7 trillion as companies build on existing strategies and investments, becoming digital-at-scale future enterprises (IDC). **Digital Tx Initiatives** 40% of all technology spending will go toward digital transformation, with enterprises spending more than $2 trillion in 2019 (IDG). Digital Tx is expected to add $100 trillion to the world economy by 2025 (World Economic Forum). **Emerging Technologies** At least 90% of new enterprise apps will embed AI technology into their processes and products by 2025 (IDC). Intelligent systems will drive 70% of customer engagements by 2025 (Gartner). ## Digital Transformation Benefits * Executives say the top benefits of digital transformation are improved operational efficiency (40%), faster time to market (36%), and the ability to meet customer expectations (35%) (PTC). * 56% of CEOs say digital improvements have led to increased revenue (Gartner). * Digitally mature companies are 23% more profitable than their less mature peers (MIT). * High-tech B2B companies have reported a 10% to 20% cost reduction and revenue growth of 10% to 15% from transforming their customer experience process. 
* 74% of business buyers say they will pay more for better Digital Transformations. ## Best Practices to meet Digital Transformation What is the best way to scope, scale, and lead the digital transformation that can deliver financial results? Below is the Gartner IT Roadmap for Digital Transformation — based on unbiased research and interactions with thousands of organizations that have successfully implemented digital business transformation initiatives. * Ambition * Design * Delivery * Scale * Refine It was indeed an insightful session with Arunava. The session ended with a few questions asked by the attendees to Arunava. Here is the Q&A: **How will HyperAutomation affect the people who are more dependent on Manual workings?** **Arunava:** Automation is key. I explained my point of view that we could not support digital transformation without automation. We are doing digital transformation to fast-track the business to go to the market flawlessly and fast. We don’t want to slow down. So people who are more into the manual, need to support the organization’s view for digital transformation. To go to the market faster, we need to bring the keyway at the same level of speed. **Currently, codeless test automation is also rising. Is it reliable to use? What’s your opinion on it?** **Arunava:** Codeless test automation is rising, and tools support codeless automation very efficiently. But some other tools are still in a very initial stage and will probably gain market share in the future. In some projects, we do not have enough skilled resources who can drive the automation right. So there, we can think about codeless test automation, that can be plugged in to get a quick answer or a quick check about the application. Highly skilled automation engineers are probably the codeless automation engineers that can help in the future. Definitely, codeless automation will gain space, and it will be more beneficial to everyone. **How much has automation helped small businesses? 
Can you cite some examples?** **Arunava**: Instead of manually reviewing applications, we can use automation if a product needs to reach the market more quickly. Despite the fact that it is a tiny business, automation allows us to observe what is not operating as intended and what is not working at all. We must fail quickly. We cannot wait till something is broken for us to realise it. In order to help the company get into production more quickly, we must fail quickly and gather knowledge.
lambdatestteam
1,206,710
PyLadies Dublin August Meetup
Thanks to everyone who joined up on Tuesday evening, a big thanks to both our speakers, Doreen...
16,483
2022-09-29T15:16:18
https://dev.to/pyladiesdub/pyladies-dublin-august-meetup-411e
pyladies
![Thank you for coming](https://media.giphy.com/media/BYoRqTmcgzHcL9TCy1/giphy.gif) Thanks to everyone who joined us on Tuesday evening, and a big thanks to both our speakers, Doreen Sacker and Glenn Strong. 🌈 Thanks to our community partners: * [PSF](https://www.python.org/psf/) (Meetup) * [PyLadies](https://pyladies.com/) * [Coding Grace](https://codinggrace.com) (StreamYard) ### August event details:- * ℹ️ Event page: https://www.meetup.com/pyladiesdublin/events/286761477/ * 🍿 Video: https://youtu.be/oU5n3ZiqR8A * 📢 Announcements: https://docs.google.com/presentation/d/e/2PACX-1vS4nEB17YmkypYHK74AET3f7PFDomPeFjlqGHd2uhAWfC3KyTizDJ3ubywnEuyADEsWKWmoaugK6cjr/pub?start=false&loop=false&delayms=3000 🤔 If you have any questions for either of our speakers, you can email us at dublin@pyladies.com and we will pass them on to the speakers and post their answers via comments on the meetup and video page. ## Talks ### Talk 1: Reproducible machine learning projects with DVC and Poetry by Doreen Sacker (20mins) Data Science teams live in the Python ecosystem; this often means using pip for package management and DIY solutions for data collaboration and training pipelines. The problem, though, is spending quite a bit of time on things that are annoying and unnecessary, such as package conflicts, missing data, and manual training processes. In this talk, I will tell you more about DVC and Poetry. Poetry resolves conflicts automatically, without you having to do it manually. DVC keeps track of assets used in projects, so we don’t have to. I want to share my experience with both tools: how they made it easier for us to work with shared data files and run training pipelines effortlessly, and how Poetry handles all package conflict resolution. **About Doreen Sacker** I decided to become a data scientist while I was an intern at a big German e-commerce company. There I met data scientists for the first time and saw the impact of their work. 
Algorithms repeat what they see; they are not objective but highly subjective, and they at least partially represent the views of the people that build them. I want to inspire women, queer people, and everyone else underrepresented in tech to understand and have an impact on the code, the algorithms, and the datasets that are being created. * https://www.linkedin.com/in/doreen-sacker * https://twitter.com/DodoSacker ### Talk 2: Bridging Scratch to Python with Pytch by Glenn Strong (20 mins) A really common entry into programming for kids is Scratch, MIT's block-based learning environment. It's great at giving users ways to build fun programs with graphics, sounds, and interaction. But when those users move on to Python they find they have to make a huge leap, often leaving behind a lot of the very fun things they have been doing to focus on text-output programs built with a very different approach. Their enjoyment and engagement can suffer as a result. We'd like to do something to help bridge that Scratch-to-Python gap, and in this talk I'll introduce Pytch, our "Scratch-oriented programming in Python" environment. The idea is to help learners write the kind of fun and interesting programs they are used to writing in Scratch while also starting to learn about programming in Python. * https://www.pytch.org **About Glenn Strong** Glenn Strong is an Assistant Professor in Computer Science in Trinity College, Dublin, where he has over 20 years of experience as an educator and researcher. Recent projects include “OurKidsCode”, developing creative family coding workshops on a national scale, and “Pytch”, creating a system to bridge Scratch and Python development, both funded by Science Foundation Ireland. Other research interests include Functional Programming and Formal Methods. He has directed the M.Sc. in Interactive Digital Media, and chaired grassroots organisations supporting Free and Open Source Software. 
* https://www.scss.tcd.ie/Glenn.Strong --- If you want to give a talk, slots are still free for October 18th from 18:30. [Update] This will be an in-person event hosted by Honeywell. 👉 [Event details](https://www.meetup.com/pyladiesdublin/events/288753446) Submit your talk details to https://share-eu1.hsforms.com/1JaDd_XRCQFKb3cp0akbz8Af1bg5 Thanks and see you at the next event!
whykay
1,207,011
How to Create a To Do List by vanilla JavaScript
Creating a "to-do list" app is one of the essential ways to deepen your DOM manipulation knowledge. I...
0
2022-10-03T19:28:56
https://dev.to/hikari7/how-to-create-a-to-do-list-by-vanilla-javascript-31dn
Creating a "to-do list" app is one of the essential ways to deepen your DOM manipulation knowledge. I made one before; however, the code was pretty redundant, so I wasn't satisfied with my work at the time. Recently, though, I revisited the essentials of the DOM and how to use them effectively. So I'd like to share what I learned and hope it gives you some ideas for creating your own "to-do list" with vanilla JavaScript. ## Basic Steps These are the basic steps to create apps with vanilla JS, which you might already know😁 1. Add HTML 2. Add CSS 3. Add JavaScript! ## The Sample And here's the sample of the "to-do list"! {% codepen https://codepen.io/hikari7-the-scripter/pen/jOxzare %} ## HTML ```HTML <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>Note Manager</title> <script src="https://kit.fontawesome.com/d4e1c36ab6.js" crossorigin="anonymous" ></script> <link rel="stylesheet" href="style.css" /> </head> <body> <div class="wrapper"> <header> <h2 id="heading">To Do List</h2> </header> <div class="note-list"> <ul id="list"></ul> </div> <div class="add-note"> <form id="add"> <input type="text" placeholder="Add note..." id="add-input" /> <button type="submit" id="add-btn">Add Note</button> </form> </div> </div> <script src="exercise.js"></script> </body> </html> ``` I haven't added the `<li>` and `<p>` elements inside `<div class="note-list">`, the main area of the notes, because they'll be created via the DOM later on. Planning this blueprint is important, since it helps you organize which parts will be added by JavaScript. Also, when adding a `<form>`, think about marking the button up with `type="submit"`: a submit control inside a form gets the automatic Enter-key behaviour, which makes for a more user-friendly experience. 
## CSS Write your styles however you like for your preferred design🎨 ```css * { margin: 0; padding: 0; } body { font-family: "Concert One", cursive; color: #333; } .wrapper { width: 90%; max-width: 760px; margin: 20px auto; border-radius: 3px; box-shadow: 2px 2px 3px rgba(0, 0, 0, 0.2); box-sizing: border-box; padding: 0 0 20px; border: 1px solid lightgray; } .changeBg { background-color: yellow; } .changeFt { font-style: italic; font-size: 40px; } input { color: #777; outline: none; border: 1px solid #777; } header { text-align: center; background: #ffd1d1; } header h2 { padding: 40px 0 20px 0; color: #555; } header #search-note input { padding: 5px 2px; width: 200px; border-radius: 2px; margin: 10px 0 40px 0; } .note-list ul { list-style: none; padding: 40px; } .note-list ul li { padding: 5px; margin-bottom: 10px; border-bottom: 0.1px solid #ccc; border-left: 5px solid #ffd1d1; } .note-list ul li p:nth-child(2) { text-align: right; } .note-list ul li p i { cursor: pointer; margin-left: 5px; } /* the edit icon from Font Awesome */ .note-list ul li p i.fa-pencil-square-o { color: #228b22; } /* the delete icon from Font Awesome */ .note-list ul li p i.fa-times { color: #dc143c; } .note-list ul li input { display: none; padding: 5px 0; width: 70%; margin: 5px auto 0 auto; } #add-notes { padding: 60px 0; } form#add { margin-top: 10px; text-align: center; } form#add input { border-radius: 2px; } form#add input[type="text"] { padding: 6px; width: 250px; } #add-btn { padding: 4px; border-radius: 2px; } ``` The edit and delete icons appear once the user adds a note, so they are styled in CSS ahead of time and shown or hidden by JS. ## JavaScript All right, let's start writing the JavaScript! First, let's look at the very basic flow of DOM manipulation. Generally, we follow the flow below for each element. **1. 
Select** To manipulate an element inside the DOM, you first need to select it and store a reference to it in a variable. ```javascript // Get the first <p> element: document.querySelector("p"); //Get the first element with class="example": document.querySelector(".example"); ``` **2. Manipulate** We can then manipulate the elements using the properties and methods available. I'll introduce the ones used in the "to-do list" below. **_style_** : returns the values of an element's style attribute. ```javascript element.style.backgroundColor = "red"; ``` **_document.createElement()_** : creates the HTML element specified by `tagName`. ```javascript const newDiv = document.createElement("div"); ``` **_Element.className_** : gets and sets the value of the class attribute of the specified element. ```javascript secondIcon.className = "fa fa-times"; //(also used to add the Font Awesome icons in this project) ``` **_Element.setAttribute()_** : sets the value of an attribute on the specified element. ```javascript setAttribute(name, value) ``` **_Event.target_** : returns the element that triggered the event. ```javascript alert(event.target); ``` **_Element.appendChild()_** : appends a node (element) as the last child of an element. ```javascript document.getElementById("myList1").appendChild(node); ``` _**removeChild()**_ : removes a child node from an element (here, the first child of a list). ```javascript list.removeChild(list.firstElementChild); ``` **3. Events** Responding to user inputs and actions! For example, the `addEventListener()` method of the EventTarget interface sets up a function that will be called whenever the specified event is delivered to the target. ```javascript //1. Select const ul = document.querySelector("#list"); //3. Events document.getElementById("add-btn").addEventListener("click", function (e) { e.preventDefault(); const addInput = document.getElementById("add-input"); if (addInput.value !== "") { //2. 
Manipulate //create note elements const li = document.createElement("li"), firstP = document.createElement("p"), secondP = document.createElement("p"), firstIcon = document.createElement("i"), secondIcon = document.createElement("i"), input = document.createElement("input"); //create attributes firstIcon.className = "fa fa-pencil-square-o"; secondIcon.className = "fa fa-times"; input.className = "edit-note"; input.setAttribute("type", "text"); //add text to first paragraph firstP.textContent = addInput.value; //appending stage secondP.appendChild(firstIcon); secondP.appendChild(secondIcon); li.appendChild(firstP); li.appendChild(secondP); li.appendChild(input); ul.appendChild(li); addInput.value = ""; } }); // Editing and deleting ul.addEventListener("click", function (e) { // console.log(this); // console.log(e.target.classList); if (e.target.classList[1] === "fa-pencil-square-o") { //<p>: the parent of the clicked icon const parentP = e.target.parentNode; parentP.style.display = "none"; //the note text <p>: previous sibling of the icon <p> const note = parentP.previousElementSibling; //<input>: the nextElementSibling of the icon <p> const input = parentP.nextElementSibling; //show the block section input.style.display = "block"; input.value = note.textContent; input.addEventListener("keypress", function (e) { //e.keyCode === 13 represents "Enter" if (e.keyCode === 13) { if (input.value !== "") { //add the text which was input note.textContent = input.value; parentP.style.display = "block"; input.style.display = "none"; } } }); } if (e.target.classList[1] === "fa-times") { const list = e.target.parentNode.parentNode; list.parentNode.removeChild(list); } }); ``` ## Tips for creating a "to-do list" by DOM Lastly, I'd like to share some parts that I think make nice tips for creating a "to-do list". 
_**Programmatic approach**_ Instead of the DOM methods, you can also write the note section with innerHTML like this: ```javascript `<li>${addInput.value}</li>` ``` Even if it is more verbose, creating elements through the DOM makes them easier to manipulate later, which also let me attach the edit and delete functions smoothly to each element. _**How to reset the value**_ Once the user adds a note, how can we reset the input value? It's easy to solve: just assign an empty string. ```javascript addInput.value = ""; ``` _**How to add the editing part**_ 1. Select the edit icon ```javascript ul.addEventListener("click", function (e) { console.log(this); console.log(e.target.classList); }); //"event.target" returns the element the user clicked on. ``` The console result will look like the screenshot below, which means you have a DOMTokenList object containing the classes. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uliertb531swf7l1wvzq.png) So, you can access index `[1]` of `e.target.classList` to identify the edit icon (the pencil icon) and set a condition to check whether the user clicked it, all inside the `addEventListener`. 2. Manipulate inside of the `addEventListener` Let's handle what happens when the user clicks the edit button! The flow is: - Create the note section, invoking the CSS - Set the `input.value` from the `textContent` - Add the event handler to apply what the user typed on keypress I also added some comments in the code above, so please refer to them as well! ## Conclusion DOM manipulation is an essential part of understanding JavaScript. I hope this helps you understand JavaScript libraries in the future🤗 Thank you so much for reading and happy coding!
hikari7
1,207,015
Day 9 I4G 10daysofcodechallenge
You are given an array of k linked-lists lists, each linked-list is sorted in ascending order. Merge...
0
2022-09-29T22:51:11
https://dev.to/abubakarismail/day-9-i4g-10daysofcodechallenge-1di6
javascript, leetcode
You are given an array of k linked-lists lists, each linked-list is sorted in ascending order. Merge all the linked-lists into one sorted linked-list and return it. Example 1: Input: lists = [[1,4,5],[1,3,4],[2,6]] Output: [1,1,2,3,4,4,5,6] Explanation: The linked-lists are: [ 1->4->5, 1->3->4, 2->6 ] merging them into one sorted list: 1->1->2->3->4->4->5->6 Problem Category: Hard!!! Omo. The first idea I got was to concatenate the lists into a single list and then sort the whole merged list using a sorting algorithm like merge sort. Then I thought about the complexity and how to simplify it: why merge all the lists first at all? Instead, I take the first list, merge it with the second while keeping the result sorted, and continue like that until I reach the end of the array of lists. Time complexity is O(n·k) for n total nodes across k lists. Space complexity is O(1) with an iterative merge. [Problem on LeetCode](https://leetcode.com/problems/merge-k-sorted-lists/) [Solution implementation in JS](https://github.com/ismaelsadeeq/I4G_coding_challenge/blob/main/mergeList.js)
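The sequential pairwise approach described above can be sketched like this (the `ListNode` shape and the helper names `mergeTwo`, `fromArray`, and `toArray` are illustrative, not the author's actual implementation):

```javascript
// Minimal linked-list node (LeetCode defines an equivalent ListNode)
function ListNode(val, next = null) { this.val = val; this.next = next; }

// Merge two sorted lists iteratively, reusing the existing nodes (O(1) extra space)
function mergeTwo(a, b) {
  const dummy = new ListNode(0);
  let tail = dummy;
  while (a && b) {
    if (a.val <= b.val) { tail.next = a; a = a.next; }
    else { tail.next = b; b = b.next; }
    tail = tail.next;
  }
  tail.next = a || b; // attach whichever list still has nodes
  return dummy.next;
}

// Merge k lists by folding them into one, pair by pair, as described above
function mergeKLists(lists) {
  return lists.reduce(mergeTwo, null);
}

// Helpers to build a list from an array and read it back out
const fromArray = (arr) => arr.reduceRight((next, val) => new ListNode(val, next), null);
const toArray = (head) => { const out = []; for (let n = head; n; n = n.next) out.push(n.val); return out; };
```

Folding the lists one by one costs O(n·k) overall; pairing the lists divide-and-conquer style, or repeatedly pulling the smallest head via a min-heap, brings this down to O(n log k).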
abubakarismail
1,207,123
My first merge!
Hello everyone! Today, I tried both fast-forward and three-way recursive merges as I was...
20,323
2022-09-30T02:25:28
https://dev.to/gulyapulya/my-first-merge-1hok
opensource, beginners, git, github
## Hello everyone! Today, I tried both fast-forward and three-way recursive merges as I was working on two different additional features for my SSG command-line tool, SSGulnur. The first feature that I was interested in implementing was adding [markdown bold and link syntax features support](https://github.com/gulyapulya/SSGulnur/issues/8). As always, I created an issue about my solution and a branch in which I worked on it. I also followed the same steps for my second feature, which was adding [markdown heading syntax features support](https://github.com/gulyapulya/SSGulnur/issues/9). I worked on them one by one. So, after I was satisfied with my new code in one branch, I would commit it with a descriptive message stating which issue it fixed, then switch to the other branch and do the same. In the end, though, I had to combine them all. Therefore, I went back to my main branch and started merging the new branches one by one. The first one went smoothly through a fast-forward merge, but the second one was a three-way recursive merge and required me to resolve the conflicts, as I had worked on the same lines of code in both branches. I combined the code accepting both and committed the changes. When I pushed my updates back to GitHub, I could see that the issues were closed and contained the commit references automatically, as I had used `fixes #(issue number)` in my commit descriptions. For my first issue the commit was [1d14563](https://github.com/gulyapulya/SSGulnur/commit/1d14563aae145d8b25dbd1e716ac4ef93571f255) and for my second issue it was [f966533](https://github.com/gulyapulya/SSGulnur/commit/f966533753cab49033655c78d1bb9b24b0a0fd0e). This ability to have many simultaneous branches is probably most useful when multiple developers are working on the same code, but it helps even when you are working alone and want to split up parts of a project. Overall, great.
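The two merge types I ran into can be reproduced in a throwaway repository like this (the branch and file names are made up for illustration, and `git init -b` needs Git 2.28+):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git config user.email "demo@example.com"
git config user.name "Demo"

echo base > file.txt
git add file.txt && git commit -qm "base"

# Fast-forward merge: main has not moved since branching,
# so Git simply advances the main pointer -- no merge commit.
git switch -qc feature-a
echo "feature A" >> file.txt
git commit -qam "feature A"
git switch -q main
git merge feature-a                 # reports "Fast-forward"

# Three-way merge: branch from an older commit so the histories
# diverge; Git now has to create a merge commit.
git switch -qc feature-b HEAD~1
echo "feature B" > other.txt
git add other.txt && git commit -qm "feature B"
git switch -q main
git merge -m "merge feature-b" feature-b
```

If both branches had edited the same lines, the second merge would stop with a conflict to resolve by hand, exactly as happened in the post.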
gulyapulya
1,207,300
Bonita online documentation needs some Hacktoberfest 2022 love!
What? Bonitasoft is taking part in Hacktoberfest, a month-long celebration of open source...
0
2022-10-05T08:01:14
https://community.bonitasoft.com/blog/bonita-online-documentation-needs-some-hacktoberfest-2022-love
hacktoberfest, contributorswanted, design, javascript
## What? Bonitasoft is taking part in **[Hacktoberfest](https://hacktoberfest.com/),** a month-long celebration of open source software, where developers are encouraged to - and rewarded for - contributing to open source projects like Bonita. In this friendly, worldwide event, maintainers are invited to guide would-be contributors towards issues that will help move their projects forward, and contributors get the opportunity to give back - to projects they like, and to others they've just discovered. This year, we propose work on the design of the [online Bonita documentation web site](https://documentation.bonitasoft.com/) 🔥. For example, there is an opportunity to improve the light and dark themes 🎨, improve the look and feel for mobile, and much more 😍. ## Who? **This project is open to everyone** and no contribution is too small — bug fixes and documentation updates are valid ways of participating ✨. ## How? It's easy! Register on [Hacktoberfest](https://hacktoberfest.com/) with your GitHub account, search for issues to contribute, and send at least 4 pull requests ... After the 4th validated PR, you win the Hacktoberfest 2022 👕 or a 🌲 is planted in your name (first 40,000 participants, so get in there early!). To contribute to the design (a.k.a _bonita-documentation-theme_) of the Bonita documentation site: * Check the [opened issues](https://github.com/bonitasoft/bonita-documentation-theme/issues?q=is%3Aissue+is%3Aopen+label%3Ahacktoberfest+) available for Hacktoberfest. * Find one that you are interested in and which is not already assigned to someone. * Post a comment to mention you are willing to work on this topic. We will acknowledge and will assign it to you to inform the world! * Work on a Pull Request. Ask any question if you are blocked or you need more details to complete your task. * Submit your Pull Request. 
We will review it quickly and work with you to ensure it is merged and visible on the documentation site. We are looking forward to your contributions 👋!
tbouffard
1,207,432
How Do Enterprise Applications Help Business Growth?
Business growth is imperative for businesses to succeed in today’s competitive market. To accelerate...
0
2022-09-30T11:20:27
https://dev.to/darshilwebclues_13/how-do-enterprise-applications-help-business-growth-p6p
entreprise, machinelearning, mobile, development
Business growth is imperative for businesses to succeed in today’s competitive market. To accelerate your business’s growth and expand your company's operations, you need an enterprise application that is feature-rich and highly compatible with your existing systems. In addition to being user-friendly, good enterprise software enhances the productivity of employees, streamlines operations, and reduces costs. Enterprise applications such as accounting software, CRM solution, ERP system, and supply chain management system have the potential to significantly improve business growth by optimizing operational efficiency in various departments. Read on to learn how you can implement one or more of these business solutions to accelerate your business growth and for the best [**Enterprise Application Development**](https://www.webcluesinfotech.com/hire-mobile-app-developers/). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fvd8lt5pmjjc33h07h8a.png) **Build A Robust Marketing & Analytics Platform** Marketing automation is one of the easiest ways to improve your business growth. A marketing automation platform can help you automate your lead nurturing and lead generation process, and can also help you analyze your campaign results and increase ROI. A marketing automation system allows you to send targeted emails, create content, send SMS, run online advertising campaigns, and even create a webinar; all with a single click. A marketing automation tool can help you create a consistent brand image and drive potential customers to your website or sales page. It can also help you measure the effectiveness of your marketing campaigns by providing you with analytics such as sales, leads generated, and average order value. If brand awareness is your top priority, you can create marketing campaigns that feature your products or service and their benefits. 
Alternatively, you can create campaigns that focus on the problems your products or service can solve. Once you have established your marketing strategy, develop a campaign timeline and schedule your marketing activities with a marketing automation tool.

**Establish an Advanced HR Management System**

A robust human resource management system can help you create an effective recruitment strategy, manage employee performance, conduct annual reviews, and even implement an onboarding process. A good HRMS also allows you to create employee contracts, manage benefits, and create a leave policy. As a business owner, you can create job descriptions, receive job applications from potential employees, hold interviews, hire employees, and facilitate other HR activities. In addition to hiring and onboarding employees, an advanced HRMS allows you to manage payroll, create pay scales, and generate payslips. It also allows you to conduct performance reviews, create employee appraisals, and maintain employee records. A good HRMS is compatible with various devices and offers an intuitive user interface. It allows you to manage employees from anywhere, and can also be integrated with enterprise applications such as accounting software, a CRM solution, and an ERP system.

**Leverage The Benefits Of Collaboration Software**

Collaboration software allows multiple users to work on documents, files, and images at the same time. It ensures that team members are up-to-date with the latest information and are aware of changes made by other users. It also allows users to communicate with each other and share ideas with their team members. Collaboration software allows you to create project schedules and assign tasks to different team members. It allows you to set up milestones, track project progress, and manage project tasks. Good collaboration software allows you to create a central repository where you can store documents and files.
It also allows you to create subgroups and assign users to subgroups. Collaboration software can also be integrated with enterprise applications such as accounting software, a CRM solution, and an ERP system.

**Enhance Supply Chain Efficiency**

If you are a B2B company that relies on suppliers, strengthen your relationship with them by adopting a supply chain management system. An SCM solution allows you to create purchase orders, track inventory, manage supplier information, and monitor your supply chain activities. A good SCM solution also allows you to create work orders, schedule service calls, and track your employees’ work hours. You can also integrate an SCM solution with other enterprise applications, such as CRM solutions, accounting software, and ERP systems.

**Install An Effective Electronic Communication Platform**

Companies that have been transformed by the digital revolution know the value of collaboration and communication. A good electronic communication platform allows you to create discussion groups, send bulk emails, and manage your employees’ communication. Good communication software also allows you to create file shares and allows employees to create group chats, while enabling admins to monitor their activities. Communication software allows you to create polls, manage calendars, create event invitations, and receive push notifications. You can also integrate communication software with other enterprise applications such as ERP systems, accounting software, CRM solutions, and SCM solutions.

**Summing up**

An enterprise application can help your business streamline operational processes and boost productivity.
A good enterprise application can help you improve your business growth by improving operational efficiency. To accelerate your business’s growth, you can implement one or more of these business solutions. When you implement enterprise applications in your company, it is important to choose the right software for your needs and the right [**mobile app development company**](https://www.webcluesinfotech.com/hire-mobile-app-developers/) like **WebClues Infotech**! The experts at WebClues would be happy to assist your business with the best Enterprise software development solutions.
darshilwebclues_13
1,207,451
Sequelize ORM with NodeJS
Introduction Sequelize is a promise based ORM (Object Relational Mapper) for NodeJS. We...
0
2022-09-30T16:34:23
https://dev.to/rutikakhaire/sequelize-orm-with-nodejs-3ogf
## Introduction

Sequelize is a promise-based ORM (Object Relational Mapper) for NodeJS. We can use multiple databases with Sequelize like Oracle, Postgres, MySQL, MariaDB, SQLite, SQL Server, and more.

**What does ORM actually mean?**

An Object Relational Mapper represents database records as objects. It lets you create and manipulate data from a database using an object-oriented paradigm. So, using Sequelize, you can perform DML operations like **SELECT, INSERT, UPDATE, DELETE**, etc. using class methods. You can also define relationships on your database tables using class methods like **hasOne()**, **belongsTo()**, **hasMany()**, etc. So, now let's get started.

---

## Create a NodeJS application

Create a new folder at your desired location and initialize it as your node.js app using the below command

`npm init`

Keep pressing the enter key after adding the required information and your node.js app is ready. Now install all the required dependencies using the below command

`npm install express mysql2 cors sequelize --save`

The package.json file will look like below after you have all the dependencies successfully installed.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/38bxxdf1nh7qy05uj3rg.png)

The next step is to create a new express web server. Add a filename.js file at the root of your folder and add the below code.

```
const cors = require("cors");
const express = require("express");

const app = express();

var corsOptions = {
  origin: "http://localhost:8081"
};

app.use(cors(corsOptions));

// parse requests of content-type - application/json
app.use(express.json());

// parse requests of content-type - application/x-www-form-urlencoded
app.use(express.urlencoded({ extended: true }));

// simple route
app.get("/", (req, res) => {
  res.json({ message: "Welcome to NodeJs App!!!" });
});

// set port, listen for requests
const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
  console.log(`Server is up and running on port ${PORT}.`);
});
```

Use the following command to execute the server

`node filename.js`

You will get the following message

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vrpuglex1t3c059zumnw.png)

Now, if you go to the browser and open the URL http://localhost:8080/ you can see the application is up and running

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uuuhadm9qzjzhnwe4cnn.png)

**Create a database**

You can go to your MySQL server and create a new database. In my case, I created it on Microsoft Azure. The creation of tables can be done with the help of Sequelize. The next step is to put all the database configuration in a file, so I have created a config.js file as below.

```
module.exports = {
  HOST: "localhost",
  USER: "root",
  PASSWORD: "",
  DB: "student_db",
  dialect: "mysql",
  pool: {           // pool configuration
    max: 5,         // maximum number of connections in pool
    min: 0,         // minimum number of connections in pool
    acquire: 30000, // maximum time, in ms, that pool will try to get a connection before throwing an error
    idle: 10000     // maximum time, in ms, that a connection can be idle before being released
  }
};
```

**Initialize Sequelize**

Create a new folder called models in the root directory and add a new file called index.js. Add the below code there.
```
const dbConfig = require("../config/config.js");
const Sequelize = require("sequelize");

const sequelize = new Sequelize(dbConfig.DB, dbConfig.USER, dbConfig.PASSWORD, {
  host: dbConfig.HOST,
  dialect: dbConfig.dialect,
  operatorsAliases: false,
  pool: {
    max: dbConfig.pool.max,
    min: dbConfig.pool.min,
    acquire: dbConfig.pool.acquire,
    idle: dbConfig.pool.idle
  }
});

const db = {};
db.Sequelize = Sequelize;
db.sequelize = sequelize;
db.student = require("./student.js")(sequelize, Sequelize);

module.exports = db;
```

Do not forget to call the sync() method in the server file:

```
const app = express();
app.use(....);

const db = require("./models");
db.sequelize.sync();
```

When you need to drop the existing tables and resynchronize the database, pass `force: true` like below:

```
db.sequelize.sync({ force: true }).then(() => {
  console.log("Drop and resync db.");
});
```

We now need to create a new model named student.js

```
module.exports = (sequelize, Sequelize) => {
  const Student = sequelize.define("student", {
    name: {
      type: Sequelize.STRING
    },
    admission: {
      type: Sequelize.INTEGER
    },
    class: {
      type: Sequelize.INTEGER
    },
    city: {
      type: Sequelize.STRING
    }
  });

  return Student;
};
```

**Creating Controller**

Below is the code for a controller (controllers/student.js).

```
const db = require("../models"); // models path depends on your structure
const Student = db.student;

exports.create = (req, res) => {
  // Validating the request
  if (!req.body.name) {
    res.status(400).send({
      message: "Name can not be empty!"
    });
    return;
  }

  // Creating a Student
  const student = {
    name: req.body.name,
    admission: req.body.admission,
    class: req.body.class,
    city: req.body.city
  };

  // Saving the Student in the database
  Student.create(student)
    .then(data => {
      res.send(data);
    })
    .catch(err => {
      res.status(500).send({
        message: err.message || "Some error occurred while creating the Student."
      });
    });
};
```

**Retrieving Data**

You can use the below code to retrieve data.
```
const Op = db.Sequelize.Op; // Sequelize operators, needed for the LIKE condition

exports.findAll = (req, res) => {
  const name = req.query.name;
  var condition = name ? { name: { [Op.like]: `%${name}%` } } : null;

  Student.findAll({ where: condition })
    .then(data => {
      res.send(data);
    })
    .catch(err => {
      res.status(500).send({
        message: err.message || "Some error occurred while retrieving data."
      });
    });
};
```

Now that we are done adding the controller and model, we need to have a route defined in our application that will execute the controller. Let's go ahead and create a route.

**Defining Route**

You can create a new folder called _routes_ and add a new routes.js file in it. Add the below code in that file.

```
module.exports = app => {
  const students = require("../controllers/student.js");

  var router = require("express").Router();

  // add new student
  router.post("/", students.create);

  // view all students
  router.get("/", students.findAll);

  // register the router so the routes are reachable
  app.use("/api/students", router);
};
```

Now include the route in the server file (filename.js) using the below code

`require("./routes/routes.js")(app);`

You can test the API by calling the routes in Postman. Thank you for reading.
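A footnote on the retrieval step: the `[Op.like]` condition compiles to SQL `WHERE name LIKE '%<term>%'`, i.e. a substring match, and when no search term is given the condition is `null`, so the WHERE clause is omitted entirely. This plain-JavaScript sketch (the `filterByName` helper and the sample rows are made up for illustration, not part of the tutorial's code) mirrors that behavior without a database:

```javascript
// Sketch of what the Op.like condition does conceptually:
// - no search term -> condition is null -> every row is returned
// - otherwise      -> rows whose name contains the term (LIKE '%term%')
// Note: MySQL's LIKE is case-insensitive by default, while String.includes
// is case-sensitive, so this is an approximation.
function filterByName(rows, term) {
  if (!term) return rows; // no WHERE clause
  return rows.filter(r => r.name.includes(term));
}

const students = [
  { name: "Asha" },
  { name: "Rohan" },
  { name: "Natasha" }
];

console.log(filterByName(students, "sha").map(r => r.name)); // ["Asha", "Natasha"]
console.log(filterByName(students, null).length);            // 3
```

Keeping the condition nullable like this is what lets a single `findAll` endpoint serve both "list everything" and "search by name".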
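One more footnote on the create controller: its guard clause is easy to get wrong (an earlier draft checked `req.body.title`, a field the Student model does not have), so it can help to see the validation isolated as a pure function. This is a sketch, and the helper name `validateStudent` is made up for illustration:

```javascript
// Hypothetical helper mirroring the controller's guard clause: reject a body
// with no `name`, otherwise return the attribute object passed to Student.create().
function validateStudent(body) {
  if (!body || !body.name) {
    return { ok: false, message: "Name can not be empty!" };
  }
  return {
    ok: true,
    student: {
      name: body.name,
      admission: body.admission,
      class: body.class,
      city: body.city
    }
  };
}

console.log(validateStudent({}).ok);                               // false
console.log(validateStudent({ name: "Asha", admission: 101 }).ok); // true
```

Extracting the check this way makes it unit-testable without spinning up Express or a database.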
rutikakhaire
1,207,470
Are you an absent manager or a micromanager? 10 red flags to watch out for and tips for finding a middle ground
In my career path as an engineer and then - as an operations manager for a tech company, I’ve come to...
0
2022-09-30T12:54:10
https://dev.to/yhwang95/are-you-an-absent-manager-or-a-micromanager-10-red-flags-to-watch-out-for-and-tips-for-finding-a-middle-ground-17ih
management, leadership, career, productivity
In my career path as an engineer and then as an operations manager for a tech company, I’ve come to believe that the two worst types of managers are those who don’t care at all and those who do too much. I’ve noticed that the shift to remote work made both lines of behavior even more pronounced and detrimental to employees.

The first case is the one where managers don’t check in on their team at all. They show no genuine interest in people’s work and are slow to respond to red flags. At first, employees might enjoy the freedom of lackluster management - no one is standing over you to check if you are writing code or scrolling through Reddit. The work is so scarce you can wrap it up in five to ten hours a week, and no one is asking for more. That typically holds up until the C-suite notices that a lackluster manager’s department is underperforming, at which point the entire team can be disposed of. Since there are so many absentee managers in remote workplaces, I am sure most people here have had at least one leader in this category. Still, I’ll illustrate this behavior with a short [example provided by a Redditor](https://www.reddit.com/r/work/comments/rarizk/what_is_your_strategy_for_working_under_an_absent/):

_“I have an 'absentee boss' - genuinely nice guy, but he mostly works remotely and we're lucky to see him in the office 2-3 times a month. He's largely disengaged and has a go-along-to-get-along management style. As a result, it's pretty much the inmates running the asylum these days, bullying/harassment goes unchecked, no plan, no focus, status quo - no news is good news, but you'll hear from him if someone has a complaint over some petty slight. E-mails, IMs, and calls don't get responded to unless you pester him relentlessly - and when he is in the office, the door is closed as he's in meetings most of the day. The rare time you do get his attention, his mind is in 20 different places.”_

The second type of manager is a micromanagement nightmare.
Such leaders are absorbed in productivity to the point of not accepting employees at their lows and not intervening when burnout is creeping up. On their teams, people are scared to show a moment of weakness and make a mistake. After a while, pent-up stress and exhaustion drive employees to a dead end. Some leave the toxic workplace (a fitting expression would be GTFO); others might end up exiting the workforce or switching industries. Letting highly skilled professionals perish because of micromanagement is a loss to organizations and to the economy at large. In my opinion, this [story a software developer shared on Reddit](https://www.reddit.com/r/askmanagers/comments/c1vvtz/i_dont_understand_the_purpose_of_micromanaging_is/) shows how damaging micromanagement is to the confidence of a top performer. I didn’t include the full story (it's available at the link above), but the general picture comes across.

_“The boss I'm currently working for asks tons of confusing questions. The boss also has an uncanny memory, perhaps even perfect recall. Boss can recall word-for-word conversations from months ago. YEARS ago. And can remember large tree directories without even glancing at the interface. It's kind of spooky… So boss and I have regular tag-ups. They are intense. I have to give a summary of my week, sometimes my day. A lot of times, it feels like a confessional, but it normally feels like an interrogation. I don't think I'm special. Boss does this to everyone. We've lost some really talented people over it, but everyone is scared to speak up because it's really just not worth the outcome.”_

In both threads, the overwhelming majority believe it’s better to leave, whether the company has an absentee manager or a controlling one. So, for leaders, the question is: how do you strike a balance between disinterest and breathing down the team's neck? Now that remote work and hybrid work are growing in popularity, finding the middle ground is getting harder by the day.
What used to be seamless in the office (for example, daily catch-ups) quickly verges on overbearing when done over Slack or video calls. That’s why managers have to be all the more careful about showing genuine interest and involvement in their team’s work without becoming overbearing. Based on my working history as a programmer and a manager, I outlined 10 aspects that define both absentee managers and micromanagers. I will examine extreme behaviors in each of these and explore the ways to find a middle ground.

## 1. Reporting

**Absent manager**: typically has no process for reporting. Such leaders can sporadically ask subordinates about project status updates but are easily satisfied with whatever answer they get. Absent managers have no desire to track month-on-month progress, pinpoint bottlenecks, and encourage their subordinates to proactively seek out ways for improvement.

**Micromanager**: these leaders need everything to be reported. They have a policy of “If it wasn’t reported, it hasn’t happened”. As a result, the rest of the team feels like they spend more time writing reports than doing the work they are paid for. For micromanagers, reports are not necessarily limited to updates on ongoing tasks but might include all interactions - they would want to know who teammates have talked to and when, and have a detailed record of what was discussed.

## 2. Communication

**Absent manager:** often prefers asynchronous communication over in-person interactions. Such leaders often take a lot of time to answer and have to be continuously pinged for a reply. Absentee managers also tend to have a reactive approach to workplace communication - they rarely show initiative and intervene only when it’s time to put out fires.

**Micromanager:** expects employees to be always on and report whenever they are offline. Even the slightest delay in response triggers a micromanager, especially if the team works remotely.
To make sure teammates are always available, micromanagers stack their reports’ days with meetings and catch-up calls, leaving people no time to focus on work.

## 3. Trust

**Absent manager**: at first glance, it would seem that a manager’s lack of interest is proof of trust in the team. However, it is often a display of indifference - absentee leaders do not really care if the team is underperforming or putting out subpar projects until their managers or clients call them out.

**Micromanager:** doesn’t trust anyone, least of all in a remote environment. In a micromanager’s mind, everyone on the team can do a better job but has an inherent tendency to slack off. In extreme cases, the lack of trust gives rise to questionable employee monitoring practices (often without consent).

## 4. Discussions

**Absent manager**: is generally hesitant to start discussions because they would require such a manager to take up extra responsibilities. On the rare occasions such a manager gets together with the team, there’s little willingness to show proactivity. A disengaged leader typically goes with the flow, hoping that the rest of the team reaches consensus independently. In discussions, an absent manager has no filter to separate ideas that are and are not worth pursuing. When wrapping up the meeting, such a leader will often try to make it look like everything went incredibly well even though no action plan was created.

**Micromanager**: will often simulate discussions to get the entire team together and have a sense of control. However, micromanagers are extremely reluctant to accept ideas that interfere with their way of doing things and typically listen to what they want to hear, tuning out out-of-the-box suggestions.
Since having the last word in discussions is crucial to controlling leaders, they would flip out when employees criticize them openly and turn ideas down for obscure reasons like “This is not the way things are done in this company” or “I have more experience, thus we should do as I say”.

## 5. Delegating

**Absent manager**: typically follows one of two patterns. Such a manager can either be too engaged in IC work, leaving no time for communicating with reports, or lack the skills to understand and contribute to ongoing projects.

**Micromanager**: follows a similar trajectory. For some, micromanagement is a way to mask their own incompetence. They feel like, by controlling and gaslighting others, they shift the focus from their lacking skillsets. Others are high performers who expect everyone to give their best at work. They are generally used to seeing their reports doing subpar work and tend to redo the tasks employees submit. As a result, such a manager is always busy and frustrated with the team. “If I could, I would clone myself and do all the work properly” - such a manager thinks.

## 6. Focus on operations vs. strategy

**Absent managers**: these are often visionaries with little willingness to focus on details and plan operations. They see the picture so big that individual details are blurred out and seem insignificant. However, the devil is often in the details, and errors in minute tasks can nip promising ideas in the bud.

**Micromanagers**: as was the case for the leader in the Reddit story referred to above, micromanagers often have excellent attention to detail (though it doesn’t necessarily have to be perfect recall). The problem is they are so absorbed in operations that the bigger picture is lost. What’s worse, micromanagers are often stuck in their ways and can’t reconcile with the notion that, through automation or a creative approach, some tasks can take less effort while others can be discarded with no impact on the end product.
Heavy focus on the “how” of a task (the look of the code, the organization of the codebase, the timelines of releases) rather than the “why” (creating a product that delights the end-user) is a common behavior pattern in this manager category.

## 7. Balancing the good cop and the bad cop

Absent managers often take the approach Kim Scott, author of [“Radical Candor”](https://www.radicalcandor.com/the-book/), adopted at the beginning of her career in management:

_“In an effort to create a positive, stress-free environment, I sidestepped a difficult but necessary part of being a boss - telling people clearly and directly when their work is not good enough”._

An employee who hears only praise and no constructive criticism should start suspecting their manager of disengagement. No one is perfect, so if every outcome your employees produce seems amazing, you either don’t know any better or don’t want to risk a confrontation because handling it is too much work. Unfortunately, outside of the workplace, disengaged leaders often have excellent relationships with their reports, making it harder for employees to call out the shortcomings of their supervisors.

**Micromanagers**, on the other hand, go big on the stick and completely disregard the carrot. They are quick to point out failures but slow to praise successes. At their worst, micromanagers have no tolerance for errors - they want teammates to “go big or go home”, which sets the bar higher than is humanly possible.

## 8. Giving teammates control

**Absent managers** expect employees to be fully in charge of their tasks. They would emphasize the importance of being a self-starter and taking the initiative, and will offer teammates no feedback on their ideas or a second opinion when it’s needed. More often than not, disinterested managers have no organizational structure and have no idea what they expect from someone in a specific role.
In such teams, employees are used to waking up to a blank workday, scrambling for tasks to put on their to-do lists, and battling impostor syndrome.

**Micromanagers**, on the other hand, give employees zero control over their routines. From the number of tasks to the deadlines and priority - everything is decided by the manager. Employees feel trapped by their roles, so rigid that they fail to accommodate changing priorities, desire for career growth, or occasional plateaus.

## 9. Dealing with failure

**Absent managers** sweep failures under the rug to protect their well-being. Even when employees are alarmed and point out red flags, managers overcompensate for incompetence or inertia by trying to instill false optimism. With their concerns ignored or not taken seriously, subordinates can feel helpless and lose trust in their leaders for not noticing challenges and risks.

**Micromanagers** often deal with failures by scapegoating their reports. Since they think everyone is underperforming, it follows that every piece of bad news is someone’s fault. As a result, teams don’t want to let their leaders know about bad news because they don’t want to see the havoc a manager will undoubtedly wreak.

## 10. Leading by example

**Absent managers** are rarely seen in action, so it’s common for subordinates to question the skillsets of their leaders. Disengaged leaders are usually removed from their organizations and show no passion for the company’s product, strategy, or mission.

**Micromanagers** are often the managers who are micromanaging themselves. Buried neck-deep in tasks, they feel overwhelmed by all too many to-do lists and are driving themselves into a crisis. When they become managers, overworked employees are at risk of micromanaging their teams because they don’t know a better way to do things. To them, working means working a lot, and they will have a hard time trusting teammates who are not putting in long hours.
## How to find a middle ground: tips for managing a remote team

Micromanagement, as well as lack of interest in the team, is not exclusive to remote teams: it’s just as common at the office. However, the transition to remote and hybrid models exacerbated management challenges across both extremes - recent Microsoft survey data shows that 85% of leaders struggle to trust that their employees are being productive. How should leaders approach managing remote or hybrid operations? Here are a few practices we adopted at oVice and find life-saving in managing an international team of over 100 people:

- **Create a space for communication even in a remote workplace.** One of the biggest challenges remote teams face is the inability to quickly reach out to someone with a quick question or ask teammates for updates. Heavy reliance on asynchronous communication leads to people losing track of their discussions, bottled-up issues, and stalled projects. On the other hand, video conferencing isn’t the best solution for synchronous communication, especially when data shows teams are already swamped with meetings. For us, [oVice](https://ovice.in/?utm_source=devto&utm_medium=guestpost&utm_campaign=micromanagement_gp), the platform we built for internal and client use, was a way to create a space where people can communicate in real-time or work side-by-side without feeling the pressure of being on camera eight hours a day. The ability to quickly connect with colleagues gave my team the ability to instantly solve problems and speed up project completion. We also saw significant improvement in engagement and retention: a virtual office space helped interconnect teams and streamline interactions between departments.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r2dzxbey484osk91op0h.jpg)

- **Focus on task-based, not time-based performance tracking**. Trying to see how much time people spend at their desks in a remote environment is a fool’s errand.
To begin with, time spent at the workstation doesn’t equal productivity - haven’t we all seen people surfing the web during their office hours? That’s why I believe the future is not in time tracking but in task-based progress tracking. The tricky part is accurately estimating which workload is manageable for an employee without overloading your team - but you can learn where to set the bar by connecting with other players in your industry or conducting monthly employee surveys.

- **Make it easier for people to ask you for help**. A manager who drops by an employee’s desk every 30 minutes with a “How can I help you?” will come across as annoying. On the other hand, if you remove yourself from the team, you will never know when people need help and will be surprised to see that there was no project progress. For me, the middle ground is to step in when people need me but make sure it’s easy for them to reach out. For one, during my working hours, I log into my virtual office space to make sure my teammates can come by and ask a question. Other than that, I have a Calendly page teammates use to book 30-minute appointments.

Although micromanagement seems to be highly popular, I am yet to see managers who admit to going too hard on their teams. Similarly, I’ve seen few leaders recognize they don’t give their teams enough attention. But, as they say, the first step to fixing a problem is admitting you have one. So, if you spotted some of the red flags listed above in your behavior, don’t beat yourself up - start looking for ways to either build up employee trust by setting realistic expectations and refraining from policing your team, or increase engagement by creating more opportunities to connect with your subordinates.
To see how a virtual office space like oVice can help you achieve both goals, explore our [case studies](https://resources.ovice.in/use-cases/) or visit the [tour space](https://tour-en.ovice.in/) (you can probably catch me there as well).
yhwang95
1,207,486
What I’ve Learnt While Working on My Second Flutter App: Numb
The Skills and Experience I Gained While Working On My Second App Hi, it’s Michael. Over...
0
2022-10-10T09:40:41
https://dev.to/devshogun/what-ive-learnt-while-working-on-my-second-flutter-app-numb-g2i
beginnercoding, flutter, reactnative, appdevelopment
---
title: "What I’ve Learnt While Working on My Second Flutter App: Numb"
published: true
date: 2022-09-24 17:02:21 UTC
tags: beginnercoding,flutter,reactnative,appdevelopment
canonical_url:
---

#### The Skills and Experience I Gained While Working On My Second App

Hi, it’s Michael. Over the past couple of weeks, starting from the beginning of September (today being the 23rd of September), I’ve been working on an app called Numb. You could say that this app is a sort of Numi clone for mobile. For those of you who don’t know what Numi is, it’s a calculator app on MacOS. You might be saying to yourself “Ok… what’s so special about a calculator app?” Well, Numi is more than just a normal calculator app. It allows you to do conversions, assign variables and many other things. Here’s a quick photo of what it looks like

![](https://cdn-images-1.medium.com/max/1024/1*bEYfwBVHRoBDGXJjxLqXDA.png)

Looks pretty cool huh? I found out about this from a fellow developer named [Takuya Matsuyama](https://twitter.com/inkdrop_app), for some of you that name might ring a bell. Numb is going to be a mobile version of this; developing a calculator this way takes the mundane beginner project of the calculator and turns it into a broader project that challenges both beginner and experienced developers.

#### TL;DR

1. I chose to make **Numb** in order to **challenge** myself and **develop** my **skills in mobile app development**, especially since I’m able to develop for iOS and Android now.
2. I chose **Flutter** to build this since it makes development **quicker**, **easier** and is more **performant** than other cross platform frameworks.
3. I gained a better understanding of **Regex**, its **use cases** and how **important** it is in a lot of software we take for granted.
4. **Numb** will be **open-source** until I come to a conclusion on **whether to monetize** (subscription model), **sell** or **make it free**.
### Why Make A Calculator App Of All Things?

I chose to make Numb because I saw it as a way to challenge my skills, my experience and my perception of some concepts. Earlier in my developer journey I skipped the basic calculator app that a lot of us developers make as our first, second or third project, and now it's coming back to bite me in the ass 😂.

One afternoon, while thinking about a project that would look good on my resume, I started thinking about apps I use every day or at least once every two days, and it came to me that I've been using Numi quite often lately. So I did some digging and realized that there was no app like this on mobile; I was only able to find one similar in concept, and it wasn't all that different from a basic calculator. That is when I decided to make one that I can carry around in my pocket. I saw it as an opportunity to build not just for myself but for a lot of people out there who are looking for a quick calculator app without all the headaches of having to tilt and look for operands and hope that the syntax is right.

### Why I Chose Flutter Over React Native

Over the past 2 years I've been dabbling in Flutter and React Native on and off. With my first app ([Twitwall](https://play.google.com/store/apps/details?id=com.essiet.twitwall)) being released in February 2021, I gained experience from a plethora of challenges and situations that I needed to figure out myself. [Twitwall](https://play.google.com/store/apps/details?id=com.essiet.twitwall) was built using Flutter; I started developing it in December of 2020 after coming across Flutter for the first time. This would prove to be the driving factor that made me switch from React Native to Flutter. While using React Native over the past few years, I noticed that the community never seemed to reach a consensus on anything.
When googling for quick guides on how to do something or how to implement a particular library/feature, the things I came across were either outdated, completely wrong or meant for a specific version of Expo/React Native. For instance, I remember looking for a quick way to implement a dismissible list item in React Native, since it had been removed from React Native in a previous update. Luckily I came across a lot of guides, tutorials and articles on how to do so, but they all went about it in a roundabout way; I mentioned this in one of my [previous articles](https://devshogun.medium.com/flutter-vs-react-native-a-cross-platform-framework-vs-react-ported-for-mobile-5f1f256f7306).

I initially started Numb with React Native and had reached the point of developing the calculation and conversion engine. However, something at the back of my head said that I should develop the frontend a bit first, so I did, and it didn't go as planned. I had to make use of a lot of libraries to handle storage, the database and theming respectively, and these are the packages I chose: [MMKV](https://github.com/mrousavy/react-native-mmkv) for storage; [React Native SQLite Storage](https://www.npmjs.com/package/react-native-sqlite-storage) for the database; [NativeBase](https://docs.nativebase.io/?utm_source=HomePage&utm_medium=Hero_Fold&utm_campaign=NativeBase_3) for theming.

Installing these alone was a pain, especially the database 🤦‍♂️; I had to install third-party libraries just to get it working. After all the installing and time wasted I still chose to stick with React Native. The last straw came after I had implemented some of the designs I had for the UI and they just didn't look right at all; NativeBase was anything but native-looking, especially its bottom modal sheet. That's when I decided I'd go with Flutter in the long run. With the performance gains, native feel and widget catalog, I soon realized that I had made the right choice.
Installing the database ([SQFlite](https://pub.dev/packages/sqflite)) and the storage package ([Get\_Storage](https://pub.dev/packages/get_storage)) literally took me less than 2 minutes. You might be wondering "What about a theming library?" Well, with Flutter, and in fact Dart, you can make use of the ChangeNotifier class and implement theming yourself fairly easily. I wrote an article on this as well recently: [Quick and Simple Way To Add Theming To Any Flutter App](https://blog.devgenius.io/quick-and-simple-way-to-add-theming-to-any-flutter-app-826c16a53e19)

With all of that sorted, I decided to deal with the engines before moving on to the frontend. Now, I admit implementing the regex, logic and structures was quite difficult, especially since I had taken such a long break from Dart as a whole, but I got rid of that ring rust fairly quickly, roughly within a week. After running several tests on the engine I was shocked at how fast it was. From decomposing, filtering, compiling and calculating, the results were instant. Moving on to the frontend, implementing the designs I had come up with was fairly easy; no need for any external libraries there. All in all, I chose Flutter because it was faster, easier and has little overhead in terms of widgets.

### How My Perspective On Regex Changed

Earlier in my developer journey I saw regex as a tool that frontend devs will hardly ever use on the job; boy, was I wrong 😂. Over the past few weeks, just working on this project alone, I've found myself on [leetcode](https://leetcode.com/problemset/algorithms/) more often than not. In order to make sure that I'm implementing properly optimized algorithms for the engine, I practiced a few [leetcode](https://leetcode.com/problemset/algorithms/) algorithmic questions here and there, about 2 a day. Still, I find myself using a more declarative approach while programming; I try not to reinvent the wheel unless absolutely necessary.
However, in this project, while making the engines, I had to do a lot of imperative programming, especially since I had to deal with a lot of edge cases. Regex is the backbone of the engines; you might think that it is some trained AI model that does word and number recognition, but nope, it's just good old regex. Arguably, regex is one of the most powerful tools backend developers have at their disposal.

![](https://cdn-images-1.medium.com/max/1024/1*EfVNq9Iby7RLBJyWmtR5Zg.png)

_A look at the basic parsing algorithm_

Regex saved me a lot of time while working on the engines, especially since I don't have to go over every character in the input.

### To Open-Source, To Sell or To Monetize? That Is The Question

As a developer still striving to get into the market and get a decent job, I don't think I'm in a position where I can make something that has taken so much of my time open-source and free in the long run. I've been deliberating over this issue for the past few days now, as I am an advocate for open-source software. I have considered going with Numi's monetization model: limiting the free version of the app to half the functionality and no cloud synchronization. I have also considered selling this app to any company or individual willing to buy it for a good enough price, as it is something that can easily be built upon to have high utility. At the end of the day, money keeps the lights on and food on the table, not advocacy or how much you contribute to the community. However, I later came to the decision to leave it open-source until I am either able to find a buyer or implement the monetization model.

If you are interested in buying the app, supporting its development or helping out with marketing in order for it to be a success, you can reach out to me on [Twitter](https://twitter.com/devshogun), [Reddit](https://www.reddit.com/user/Shogun-2077) or email me at my public address (emsaa2002@gmail.com).
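Circling back to the engine for a moment: here is a hypothetical sketch (not Numb's actual code) of the kind of single-pass, regex-driven tokenizing described above, where one pattern splits an input into numbers, operators and words without walking the string character by character. The token pattern itself is an assumption for illustration:

```javascript
// Hypothetical tokenizer sketch: one regex matches numbers (including
// decimals), arithmetic operators/parentheses, and bare words (units or
// variable names) in a single pass over the input.
const TOKEN = /\d+(?:\.\d+)?|[+\-*/()]|[a-zA-Z]+/g;

function tokenize(input) {
  // String.prototype.match with a global regex returns every match,
  // or null when nothing in the string matches at all.
  return input.match(TOKEN) || [];
}
```

From a token list like `["12.5", "kg", "+", "3"]`, a calculation/conversion engine can then decide which tokens are operands, units or operators.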
### Quick Look At The Screens So Far

![](https://cdn-images-1.medium.com/max/828/1*v_vmxNG8WtGlpoZarAWxPw.jpeg)
![](https://cdn-images-1.medium.com/max/828/1*tBRpDWS3hUIvRjpxIUUiqg.jpeg)
![](https://cdn-images-1.medium.com/max/390/1*dspy3N7FJPOF59ltdsXP7Q.png)

_From left to right: main screen, theming and nav modal, and help/instructions screen_

I hope you enjoyed this article and that it helped you in at least one way. For the next few weeks I will not be releasing any articles, since I will be focusing on helping out with the new SolidJS documentation and trying to make it into the fellows program, but I will be back towards the end of October with an update. I hope you had a great time reading this article and I hope you have an even greater day. Bye for now!

#### **Follow me online for more frequent updates**

- Twitter: [Michael.E (@devshogun) / Twitter](https://twitter.com/devshogun)
- Reddit: [Devshogun (u/Shogun-2077) — Reddit](https://www.reddit.com/user/Shogun-2077)
- Medium: [Michael Essiet — Medium](https://devshogun.medium.com/)
- Dev.to: [Michael Essiet — DEV Community 👩‍💻👨‍💻](https://dev.to/devshogun)

* * *
devshogun
1,207,665
Content SaaS | Integrations Library - Markdown Editor as a UI Extension
Develop a Markdown editor UI extension and add it as a custom integration to your Bloomreach Content...
20,160
2022-10-17T09:23:05
https://dev.to/bloomreach/content-saas-integrations-library-markdown-editor-as-a-ui-extension-1fc6
bloomreach, saas, integrations, markdown
Develop a Markdown editor UI extension and add it as a custom integration to your Bloomreach Content environment.

Recently, Bloomreach Content (SaaS) released a feature, [Integrations Library - UI Extensions](https://documentation.bloomreach.com/content/docs/integrations-library?_ga=2.2576438.729086942.1664529358-129010115.1663337065). By using this feature, you can add your document field extensions as a custom integration. This opens up new possibilities to create content-type fields of your own. Here is how it works:

- **A UI Extension (Custom Integration) application is loaded as an iframe inside of a field of a content type in the Experience manager.**
- **The UI Extension application uses the [UI Extension Client Library](https://documentation.bloomreach.com/content/docs/ui-extension-client-library?_ga=2.183588000.729086942.1664529358-129010115.1663337065) to communicate between the application and the Experience manager - primarily to save and read field values in the CMS.**
- **The UI Extension application can be built in any well-known frontend framework e.g., React, Angular, Vue, or plain JS, as long as it includes the Client Library.**
- **Additional configuration and context (such as CMS user or locale) can be passed along to the UI Extension application.**

For this blog, I'm going to create a new UI extension that will allow editors to write in Markdown markup. Markdown is nowadays used frequently with native mobile applications. I recently created an integration between Bloomreach Content and Flutter with the [Flutter SDK](link flutter blogpost once up), and this markdown editor is complementary to the Flutter SDK. The markdown editor itself is an integration with https://stackedit.io/, which is, in my opinion, a very well-constructed and easy-to-use markdown editor.

## Frontend Project

Firstly, create the UI Extension Application as a new frontend project.
I'm very familiar with React, so I'll create a simple React project:

```
yarn create react-app markdown-ui-extension
```

Install the UI Extension Client Library:

```
yarn add @bloomreach/ui-extension
```

## Code the Plugin

Register the UI extension:

```
const ui = await UiExtension.register();
```

Get the current field value:

```
const brDocument = await ui.document.get();
const value = await ui.document.field.getValue();
```

Set a field value:

```
ui.document.field.setValue('new value');
```

The source code of the markdown editor can be found at: https://github.com/ksalic/markdown-field/

## Deploy the Plugin

The Markdown editor plugin is deployed at: https://markdown-field.bloomreach.works/

## Use the Integrations UI to Register the Extension

As a developer, log in to Bloomreach Content and navigate to _Setup > Integrations_:

![Setup](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/otywfjm4dh9f03tbkmvz.png)

Add a new "Custom Integration":

![Integrations](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hwceh2fyb0h3bbdqpbk8.png)

Make sure all of the fields are filled in as follows:

![Markdown integration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bsxymyd7851psugxdk5h.png)

Once you save, the custom integration is listed and available for use on content types.

![Added Integration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l0jwpbh1bn595vlr8bmr.png)

## Add the UI Extension to a Content Type

As a developer, make sure you create a new [developer project](https://documentation.bloomreach.com/content/docs/development-environment?_ga=2.188316590.729086942.1664529358-129010115.1663337065) and that [content type changes](https://documentation.bloomreach.com/content/docs/content-modeling?_ga=2.188316590.729086942.1664529358-129010115.1663337065) are checked!
Create or edit a new [content type](https://documentation.bloomreach.com/content/docs/document-type-editor?_ga=2.188316590.729086942.1664529358-129010115.1663337065). Add a new Open UI String field to the content type:

![Open UI String](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mzb3a1cd211k6k49ndu6.png)

Select the UI Extension from the dropdown:

![Select UI Extension](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/je6auphw3z7rf5ia3hyw.png)

Congratulations! You have now successfully added a markdown field to a content type!

![Markdown Field](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3otjrxasq01m5v66r7ed.png)

The source code of the markdown editor can be found at: https://github.com/ksalic/markdown-field/
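Putting the register/get/set snippets above together, a small glue helper could sync any editor widget with the CMS field via the `ui` object returned by `UiExtension.register()`. This is a hypothetical sketch: the `editor` object and its `setContent`/`onChange` methods are invented here for illustration; only the `ui.document.field` calls come from the Client Library snippets shown earlier.

```javascript
// Hypothetical glue code: `ui` is the object returned by
// UiExtension.register(); `editor` is any widget exposing
// setContent/onChange (these editor method names are made up).
async function bindFieldToEditor(ui, editor) {
  // Load the stored field value into the editor on startup
  const value = await ui.document.field.getValue();
  editor.setContent(value);

  // Push every edit back into the CMS field so it gets saved
  editor.onChange((newValue) => ui.document.field.setValue(newValue));
}
```

The same shape works for the StackEdit-based markdown editor or any other widget, since the CMS side only ever sees string values going through `getValue`/`setValue`.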
ksalic
1,207,787
The Rising Coder - Week 5/13 (Backend Week 1/3)
Did somebody say it was already Friday? Blimey now I understand why the acronym "TGIF" exists. But...
19,721
2022-09-30T19:04:21
https://dev.to/clam119/the-rising-coder-week-513-backend-week-13-4go3
beginners, codenewbie, bootcamp, javascript
Did somebody say it was already Friday? Blimey, now I understand why the acronym "TGIF" exists. But yet again, another brilliant but definitely exhausting week at the Northcoders Bootcamp!

Good morning, good afternoon and perhaps good evening, and a huge thank you again for coming to check up on me and my journey here at [Northcoders](https://northcoders.com). As always, feel free to drop me a follow on [Twitter](https://twitter.com/anipi119), [LinkedIn](https://www.linkedin.com/in/christopher-lam-792b90161/) and [GitHub](https://github.com/clam119)!

## So... How has the fifth week treated me?

If you read my previous week's post, where I compared the week to trying to catch a plane that had just set off, this week imagine that you're stuck in the ocean and the plane you're trying to catch set off a few hours ago. Yep, that's how I felt this week 😭

The beautiful thing about the structure of Northcoders is that each topic we learn builds off of concepts we had covered beforehand. And before you even realise it, you've already learnt so much compared to where you first started! In all honesty, if I showed the code I've written in Express to the past me that had just started Northcoders, I would be skeptical and even ask myself, "Where did you copy and paste that code?"

So, to fellow Northcoders who may feel like they have struggled with this week's concepts and are perhaps feeling behind: please don't put yourself down. Instead, look back at what you have done so far and be as proud as I am of your own progress. In order to run we must first walk, and everyone will start walking and running at different times; your time will come!

With that said, this week felt significantly more difficult because previously we were dealing with the "Fundamentals", per the name of the block.
There, we were mostly gaining an understanding of the data types and the intricacies of how they worked, because all of this feeds into the rest of the course (Backend/Frontend).

### What I Struggled Most With

This week I struggled most when we were thrown into the deep end and told to learn a new library called Inquirer.js, using the concepts we had learned to implement it. I won't sugarcoat it: I felt panicked, but ultimately I was able to take it one step at a time by:

* Reading the overview and understanding what the library was and why it was used
* Breaking down examples of how the library is used and what syntax is used for what purpose
* Reading the documentation on how individual methods in the library work
* Breaking down the task from Northcoders and using the library with the concepts covered

The head mentor Christian dropped in to see how I was doing, and he kindly reassured me that what I felt was normal: it's a hurdle everyone has to overcome, because once we head into the final projects phase, we will have to learn entirely new technologies. A huge shout out to all of the mentors who spent this entire week doing scheduled periodic drop-ins and keeping us grounded by empathising with our experiences; you're all exceptionally wonderful! (P.S. A huge welcome to Mitch as a new addition to the Northcoders team!)

## So, what did we cover this week?

This week we built on our understanding of callbacks, asynchronicity and asynchronous functions with the following topics:

* Promises & `Promise.all()`
* Creating an HTTP Server With Node's `createServer()`
* Introduction to Express and Middleware
* The Concept of MVC - Model View Controller
* Introduction to SQL

### Introduction to Promises

Promises... I promise you that I struggled with this 😂 So, what are promises?
A "Promise" is simply an object in JavaScript that is used for asynchronous computation; it represents a placeholder for an asynchronous operation that has not yet been completed. A promise has three possible states:

* Pending - the promise has not yet settled.
* Fulfilled - the operation has succeeded.
* Rejected - the operation has failed.

To handle a fulfilled or rejected promise, we use the `.then()` method to describe what happens if the promise is successfully fulfilled, and the `.catch()` method for unsuccessful operations that reject the promise. Although this may look strange to those of you who may be unfamiliar with promises, please bear with me:

```js
promise
  .then((response) => {
    // handle success
    console.log(response);
  })
  .catch((error) => {
    // handle error
    console.log(error);
  });
```

The chaining of `.then()` followed by `.catch()` is only possible because a "Promise" is an object whose methods (functions) are inherited by each new instance of a promise. Therefore, simply accessing the methods with dot notation and passing a callback function allows us to add computations upon fulfillment/rejection of the promise.

### `Promise.all()`

I admit that I am one of the guilty Northcoderians who thought it was somewhat therapeutic to chain `.then()` all over the screen 😂 So what exactly is this method and what does it do? Simply put, it takes an array of pending promises and, with a single `.then()` and `.catch()` block, resolves all of the promises in one go. If a single promise within the `Promise.all()` fails, then it will automatically go straight into the `.catch()` block, which is definitely something I struggled to fully comprehend.
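To illustrate that all-or-nothing behaviour, here's a small standalone sketch (not from the course material):

```javascript
// Two promises fulfil, one rejects: Promise.all still rejects as a
// whole, skipping the .then() and landing straight in .catch().
const fast = Promise.resolve(1);
const slow = new Promise((resolve) => setTimeout(() => resolve(2), 10));
const broken = Promise.reject(new Error("one failure rejects the lot"));

const outcome = Promise.all([fast, slow, broken])
  .then((values) => ({ ok: true, values }))
  .catch((err) => ({ ok: false, reason: err.message }));
```

Here `outcome` settles with `{ ok: false, reason: "one failure rejects the lot" }` even though two of the three promises fulfilled successfully.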
But alas, `Promise.all()` is one of those concepts that will eventually click once you hit the bottleneck of having many promises that need to be resolved in one location. Here's an example that briefly highlights how useful the `Promise.all()` method is, and which helped shine a light on how to use it:

```js
const fs = require('fs/promises');

const fetchAllPets = () => {
  return fs.readdir('./data/pets').then((pets) => {
    const allPetsData = pets.map((pet) => {
      return fs.readFile(`./data/pets/${pet}`, 'utf-8');
    });
    return Promise.all(allPetsData);
  }).then((pets) => {
    const parsedPets = pets.map((stringifiedPets) => {
      return JSON.parse(stringifiedPets);
    });
    return parsedPets;
  });
};
```

This is a function my pair and I wrote for one of this week's sprints. Here, "fs" stands for Node's "File System" module, which is used to access the local file system and do something with its contents; we are using the "promises" version, which requires you to resolve the promise before data is sent back. The most important part is that the variable `allPetsData`, produced by mapping over the array of filenames we get back from reading the directory `./data/pets`, is itself an array of around 13 pending promises. Imagine if we had tried to resolve these individually, and how many lines of code we would need! Instead we return all of the promises as an array and use the combined result for further computation: such an elegant and clean way of doing things!

### Creating HTTP Servers With `.createServer()`

Last week I touched upon HTTP methods, CRUD operations and the anatomy of a URL. This time we were put to the test using Node's HTTP module, which allows us to create our own server running on localhost on the port of our choice!
First we require in the HTTP module, create a "server" variable that invokes the module's `createServer()` with a callback, and finally have the server listen on a port on our machine so that it can pick up requests:

```js
const http = require('http');
const fs = require('fs');
const fsPromises = require('fs/promises');

const server = http.createServer((req, res) => {
  // handle requests and send responses here
});

server.listen(9090, (err) => {
  if (err) console.log(err);
  else console.log('Successful connection on port 9090');
});
```

With this setup, we can now dive into a bit of the anatomy of the example above:

* The `const server` is just the invocation of the `createServer` method on the `http` object that we required in.
* This `createServer()` method takes a callback that has the "Req/Request" and "Res/Response" as its arguments.
* The "Request" is an object that the server receives when a client makes a "method" request to our server and its endpoints.
* The "Response" is also an object, but this is what we use to send data back to the user.

### Writing Responses Guidelines

I believe this small section is important, because successfully sending a response back to the user is paramount for both the server and its users, so try to remember the following:

* A response will always require a "body" of information that will be sent back to the user.
* The response body will need to be of a string data type.
* It's recommended that you include a "Header" in your response - this contains metadata sent back to the user pertaining to the response made.

### Introduction to Express

A quick recap of what a "parametric endpoint" is: it's simply a path with a placeholder whose value is dictated by what the client puts there, e.g. `https://northcoders.com/api/users/:userId`. When we created our own servers with `http.createServer()`, we had to do *really ugly* computation in order to deal with parametric endpoints.
Here's an example of how messy it can look when we had to deal with a parametric endpoint that ends with a digit:

```js
if (/\d+$/g.test(req.url) && req.method === "GET") {
  // ...
}
```

Not exactly the easiest code to understand, and it can definitely lead to a lot of confusion. But with the *minimalistic* backend framework Express.js, our lives are a little bit easier! Express is described as a minimalistic framework because it allows us to concentrate on dealing with the requests coming in and serving up responses to the end user. With Express we get access to a wide variety of tools, such as:

* `req.params` - what we use to deal with dynamic parametric endpoints; no more regex conditionals.
* `req.query` - what we use to access the queries passed in the request by the client.
* `res.send()` - the method we invoke with a string of data to be sent back to the user.
* `res.status()` - the method we can chain onto the response being sent back to the user, carrying the status code that depends on the HTTP method and whether it was successful or not.

I could go on all day about how relieving it was to go from Node's HTTP module to Express and its features, but I'll save that for another day!

### Introduction to the MVC Architecture

MVC simply stands for "Model View Controller" and is a simple way of setting up your backend server so that each individual component is responsible for its own functionality. The functionalities are:

* Model - deals with data in storage, whether that's reading/writing/updating/deleting data locally or on a database.
* View - what will be sent back to the user, in the form of an HTML file and corresponding CSS/JS files, when the server has finished rendering.
* Controller - handles the requests & responses, and as such is classed as the "Middleware Function".
The controller is responsible for taking in requests, sending instructions to the model to retrieve data, and sending it back to the user. Previously my pair and I were bewildered by the concept and questioned why it existed; the answer lies in "structure" and "scalability". Sure, we would be able to create an entire server in one file that handled all three, but if we ever wanted to scale up and perhaps change the "model" that handles the data, it would be difficult, because there would be an infinite amount of code to filter through just to find the parts pertaining to that model. Although it makes sense, refactoring into the MVC architectural model definitely took a bit of time to get used to and understand!

### SQL - My Dreaded Enemy

For so many years I have avoided SQL because of its very nature, which just absolutely scared me. But after spending an entire day trying to get to grips with it, I can say that it is something I will need to spend more time with 😭 So, what is SQL? SQL stands for "Structured Query Language" and is based on "relational databases". This means it has columns and rows of data for each table in a single database, and you're able to "relate" data between tables by doing pretty neat things such as "joining" them together, creating an entirely new table using data from multiple tables. Beyond this I could try to explain the complete intricacies of the language, but then I'd end up writing the documentation, so just know that it's a logically structured language that is useful for making complex queries!

### Thoughts Going Forward

After having talked with Scarlett (one of our great mentors), I learned that we won't have to mess around too much with vanilla SQL thanks to the introduction of "node-postgres", which has made me feel more at ease. So I'm just ready to enjoy my weekend, and I hope you all are too!
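The MVC split described earlier can be sketched in plain JavaScript (a toy illustration of the idea, not Northcoders' code): the model owns data access, the controller coordinates, and the response sent to the client plays the part of the view.

```javascript
// Model: the only layer that touches the data source
// (a plain array stands in for a file or database here).
const petsModel = {
  data: ["Mitch", "Scarlett"],
  fetchAll() {
    return Promise.resolve([...this.data]);
  },
};

// Controller: takes the request, asks the model for data,
// and shapes the response sent back to the client.
const petsController = {
  getPets(req, res) {
    return petsModel.fetchAll().then((pets) => {
      res.status(200).send({ pets }); // the "view" in an API server
    });
  },
};
```

Swapping the array for node-postgres queries would only touch the model; the controller and its responses stay exactly the same, which is the whole point of the separation.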
Honestly, that's about it for this week's blog, but if you have gotten this far then thank you again for taking the time to read, and I will see you next week!

### Socials

Feel free to give me a follow on any of my socials down below!

* [GitHub](https://github.com/clam119)
* [Dev.To](https://dev.to/clam119)
* [LinkedIn](https://www.linkedin.com/in/christopher-lam-792b90161/)
clam119
1,207,936
A Rounded Solution to Image Handling on the OpenSauced Dashboard
This post is about the journey we took to improve the OpenSauced Dashboard, which provides valuable...
0
2023-05-17T21:46:54
https://dev.to/opensauced/a-rounded-solution-to-image-handling-on-the-opensauced-dashboard-4n34
react, javascript, cloudinary, github
*This post is about the journey we took to improve the OpenSauced Dashboard, which provides valuable insights on open-source contributions.*

Last October, we had an alpha launch of the new OpenSauced Insights dashboard, timed to coincide with the biggest open-source hackathon, Hacktoberfest. Our goal was to provide insights and reports on the number of contributions accepted, merged, and even marked as spam. Since then, the dashboard has evolved beyond Hacktoberfest data and now includes GitHub avatars representing active and open GitHub pull requests in the last 30 days.

_If you'd like to see your open source contributions, [connect your GitHub account to OpenSauced](https://insights.opensauced.pizza?utm=dev)._

## Challenges Faced

At the beginning, our team encountered challenges in sourcing and manipulating GitHub avatars to display on a scatter plot e-chart on the dashboard. We struggled to find a suitable solution, especially when it came to handling a large number of images efficiently. Part of the challenge is that GitHub aggressively rate-limits unauthenticated requests for resources like avatars fetched from a URL. We needed to fetch the avatars, manipulate them, and then cache them. Because our product was meant to be public, we would attract a lot of usage, and it could cost real money quickly.

## Solution: Leveraging nivo and Cloudinary

To overcome these challenges, the team turned to the **nivo chart library** for visualizations and **Cloudinary** for image manipulation and the caching of 271k images. We successfully integrated nivo charts into our dashboard thanks to its rich set of data visualization components built on top of d3 and React. By leveraging Cloudinary's image manipulation capabilities and caching strategy, we were able to round the avatars and seamlessly integrate them into the dashboard.
![dashboard screenshot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u425gz6xkwdztzg2ucsj.png)

Today this works thanks to [nivo](https://nivo.rocks/) and [Cloudinary](https://cloudinary.com/), but the journey there included a lot of trials and testing to find the right solution.

_Nivo is an open-source library that provides a rich set of data visualization components built on top of the awesome d3 and React libraries._

{% github https://github.com/plouc/nivo %}

_Cloudinary is a hosted solution for manipulating and caching images for reuse._

## Handling Image Processing with Apache E-charts

Before adopting nivo and Cloudinary, we initially used Apache E-charts (specifically a React wrapper called [echarts-for-react](https://github.com/hustcc/echarts-for-react)) to handle image processing and loading. This approach proved extremely slow, and figuring out a better solution under a real-time constraint was quite the process.

## The Journey to Finding Solutions

We faced a challenge providing images on the page that were sourced directly from GitHub. The avatars needed to be cached and manipulated to match our rounded image design. The total number of displayed contributors during the event was around 150k, and today close to 300k contributors are represented as contributions in the most popular open source repositories on OpenSauced.

![dashboard image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dl5oeh1g6u1xufwxku4y.jpg)

The echarts-for-react solution gave us no access to the images once rendered, and provided limited options to edit the chart after it was displayed. We built our product from Figma designs first and were excited at the opportunity to have rounded images. Still, our e-chart library would only allow plain, square images, and any manipulation of the images was a challenge.
[design]
![Rounded image design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8s9lcqb7qt3uuittb26r.png)

[reality]
![harsh square image design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8i8oaapjzsdjgvsfa6d3.png)

There comes a time in every developer's career when you are presented a design that may be just out of reach or out of scope. The simplest request around the images broke our charts and required the entire team to brainstorm solutions. You can see that brainstorming in our now-closed issues.

{% github https://github.com/open-sauced/insights/issues/373#issuecomment-1249618915 %}

## Solution 1: JavaScript Approach

Our first attempt was manually rounding the images using a quick lib function I threw together:

```js
function applyBorderRadius(imageElement) {
  imageElement.style.borderRadius = '50%';
}

// Get the image element
var image = document.getElementById('myImage');

// Apply the 50% border radius
applyBorderRadius(image);
```

It's always good to start with an approach, even if it is wrong. This first solution didn't work because the e-chart rendered the images itself on the scatter chart; we could not get direct access to the elements to manipulate them after the fact. We needed a way to manipulate the images before sending them to the chart.

## Solution 2: ImageMagick for Rounding Images

[ImageMagick](https://imagemagick.org/index.php) is a fun open-source platform for displaying, creating, converting, modifying, and editing images. I had some experience working with ImageMagick at a previous employer and quickly found a solution to round the images before sending them to the chart. But when I found a [Stack Overflow answer](https://stackoverflow.com/questions/67342758/displaying-rounded-images-in-github-pages-for-profile-image-using-markdown) doing something similar, I opted to use that instead.
``` https://images.weserv.nl/?url=https://www.github.com/bdougie.png?size=60?v=4&h=300&w=300&fit=cover&mask=circle&maxage=7d ``` [Link to this solution](https://images.weserv.nl/?url=https://www.github.com/bdougie.png?size=60?v=4&h=300&w=300&fit=cover&mask=circle&maxage=7d) This was working, but we still needed to solve the caching issue. I really wanted to try out building a service to use ImageMagick + the new [Supabase storage](https://supabase.com/storage) to do this, but I wasn't willing to maintain that solution and we only had a little time to explore more unique tools. (_If you want to build a service like this, find me, and I would love to be a beta tester._) We needed a way to cache the images before sending to the chart and started looking at tools or services to make this easier. ## Solution 3: Leveraging Cloudinary for Image Manipulation and Caching To solve the caching issue and optimize image processing, we explored different tools and services. Cloudinary offers image manipulation and a caching strategy. They also have a generous free tier--but for full transparency, I'll point out that our initial amount of data for Hacktoberfest was pushed up to the paid tier immediately. ``` https://res.cloudinary.com/bdougie/image/fetch/f_auto,q_auto/w_400,h_400,c_crop,r_400,g_auto/v1/https://avatars.githubusercontent.com/u/5713670 ``` [Link to Cloudinary solution](https://res.cloudinary.com/bdougie/image/fetch/f_auto,q_auto/w_400,h_400,c_crop,r_400,g_auto/v1/https://avatars.githubusercontent.com/u/5713670) You can see the PR with the solution live as well. {% github https://github.com/open-sauced/insights/pull/467 %} The solution was building a wrapper around using the GitHub avatar as the `imageUrl`. ```js // lib/utils/roundedImages const roundedImage = (imageUrl: string, cloudName: string | undefined) => { return cloudName ? 
`https://res.cloudinary.com/${cloudName}/image/fetch/c_fill,g_face,h_300,w_300,bo_20px_solid_white,r_max/f_auto,e_shadow/${imageUrl}`
    : imageUrl;
};

// components/organisms/Dashboard/dashboard.tsx
import roundedImage from "lib/utils/roundedImages";

scatterChartData = prs.map(({ updated_at, linesCount, author_login }) => {
  const author_image = author_login.includes("[bot]") ? "octocat" : author_login;

  const data = {
    x: calcDaysFromToday(new Date(updated_at)),
    y: linesCount,
    contributor: author_login,
    image: roundedImage(`https://www.github.com/${author_image}.png?size=60`, process.env.NEXT_PUBLIC_CLOUD_NAME)
  };

  return data;
});
```

_[Link to code](https://github.com/open-sauced/insights/blob/beta/components/organisms/Dashboard/dashboard.tsx)_

The interaction with Cloudinary is all URL based, allowing us to pass the GitHub user id as an option on the fly. The same id is also used to recall the cached image, meaning we get caching by default. We also did not need to build any new infrastructure for this.

With the initial 150k user profiles cached, we immediately needed to pay $90 per month for the pro tier, which we felt was reasonable and predictable for us. We are still on the same Cloudinary plan today, mainly due to inertia and the fact that we are still not ready to build and maintain something that is not core to our product. So far, Cloudinary is caching 271k image transforms for us.

This is a classic build vs buy scenario where we could build our own caching and image storage on S3, but the tech debt imposed by a solution like that was not our priority. The $90 was not going to put us into debt either.

## Migration to nivo Charts

In December, we made the switch from Apache ECharts to nivo charts for its modern features and active community. The decision was driven by the outdated state of ECharts and the limited contributions it received.
We plan to contribute upstream to nivo to improve its integration and interaction with images. We have an active conversation in the nivo discussions about this and other feature enhancements. The maintainer has been really responsive, and we look forward to contributing upstream to support the project. {% github https://github.com/plouc/nivo/issues/2201 %} If you have any thoughts or comments about our approach or have alternative solutions to share, we welcome your input. Let's continue learning from each other and enhancing our open-source projects.
bdougieyo
1,208,298
NestJS Micro-services 101 Part 1 - Try to communicate btw two services
I try to learn NestJS Framework to make micro-services Objective in Part 1 send simple...
0
2022-10-01T16:50:05
https://dev.to/mossnana/nestjs-micro-services-101-part-1-try-to-communicate-btw-two-services-1i8i
microservices, nestjs, javascript, rabbitmq
I'm trying to learn the NestJS framework by building microservices.

## Objective in Part 1

Send a simple message from one service to another.

```bash
# Project 1: api-gateway
nest new api-gateway

# Project 2: user-service
nest new user
```

---

User Request (http://localhost:3000) -> API Gateway -> User

---

The difference between the bootstrap code in `api-gateway` and `user`:

api-gateway `main.ts`
![api-gateway](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4b59it9jxalrl84cotsx.png)

user `main.ts`
![user](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3xzvawff44t2aqiewlth.png)

---

The api-gateway service knows about the user service by registering it in the api-gateway module.
![api-gateway-module](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ri96oduucdgog7uuwdtb.png)

---

Now you can inject the user service client proxy in the constructor and use it to send data; `hello` is the event pattern.
![user-service-send](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tl7pbp0f7imj5vlf996v.png)

---

The user service controller receives events with the pattern `hello` by using the `EventPattern` decorator.
![user-controller](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jydd42ign4xaykhsqwd3.png)

---

And this is the response:
![response](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6hgysjsct5tu2fmd0sgv.png)

---

Full code: [https://github.com/mossnana/nestjs/tree/microservices](https://github.com/mossnana/nestjs/tree/microservices)
mossnana
1,209,036
Doomsday Algorithm
The Doomsday rule, Doomsday algorithm or Doomsday method is an algorithm of determination of the day...
0
2022-10-02T09:06:30
https://dev.to/msobkyy/doomsday-algorithm-1ahf
javascript, math, react
The Doomsday rule, Doomsday algorithm, or Doomsday method is an algorithm for determining the day of the week for a given date. The algorithm is simple enough to be computed mentally; with practice, some people can give the correct answer in under two seconds.

## How does the Doomsday algorithm work?

The Doomsday algorithm takes advantage of the fact that each year has a certain day of the week, called its doomsday, upon which certain easy-to-remember dates fall. For example, the last day of February, 4/4, 6/6, 8/8, 10/10, and 12/12 all occur on the same day of the week in any year. We can use that to determine the day of the week for any date in the year, provided we know the year's doomsday.

Once you have determined which day is the doomsday for the year in question, you can find the day of the week for any date by counting forward or backward from the nearest doomsday to your target date.

The question is: how do we determine which day the doomsday falls on for a given year?

## The Rules of Doomsday Algorithm

I created an application based on this algorithm to determine the day of the week, with an explanation of the algorithm applied directly to the date you enter. You can view it here: [https://doomsday-algorithm.vercel.app/](https://doomsday-algorithm.vercel.app/)

Let's take a random date, for example 7/9/1812, and walk through the algorithm:

1. Start by taking the last two digits of the year; this is your startingNumber. In our case it will be (12)
2. Divide the startingNumber (12) by four and ignore the remainder: 12 / 4 = 3
3. Add the last result (3) to the startingNumber: 3 + 12 = 15
4.
Find the remainder of the last result (15) divided by 7: 15 % 7 = 1. (Or find the nearest number less than 15 that is divisible by 7, and subtract it from 15.)

Now we need to find the century's anchor day. We can read it from the table below; the table repeats, so you can easily find the anchor day for any century.

![Doomsday Century](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/39j81dkfjb5ddfk33fa2.png)

- After determining the century's anchor day (Friday for the 1800s), we add the last result to that day: _**Friday + (1) = Saturday**_

Now we know the doomsday for the year 1812 is Saturday. Next, we need to find the closest doomsday to our date, which we can do using this table:

![Doomsday Month](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/us5v5q5w6xpjrqhglcbs.png)

> (Note: if the year is a leap year, the doomsday for January will be the 4th and for February the 29th; if it is not a leap year, January's will be the 3rd and February's the 28th.)

We now know that 5 September is a Saturday, so we can simply count from 5/9 to 7/9: the result is **Monday**.

> (Note: If the difference between the two dates is large, you can add or subtract 7 until you reach the number nearest the specified date, and then count.)

**Resources**
[Doomsday rule](https://en.wikipedia.org/wiki/Doomsday_rule)
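Putting the steps above together, here is a minimal JavaScript sketch. The function name `dayOfWeek`, the day-name array, and the per-month doomsday table are my own, following the rules described above:

```javascript
const DAYS = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"];

function isLeap(year) {
  return (year % 4 === 0 && year % 100 !== 0) || year % 400 === 0;
}

function dayOfWeek(year, month, day) {
  // Steps 1-4: compute the year's offset from the century anchor (Sunday = 0)
  const startingNumber = year % 100;                           // step 1
  const sum = startingNumber + Math.floor(startingNumber / 4); // steps 2-3
  const offset = sum % 7;                                      // step 4
  // Century anchors repeat every 400 years (1800s -> Friday, 1900s -> Wednesday, ...)
  const centuryAnchor = (5 * (Math.floor(year / 100) % 4) + 2) % 7;
  const doomsday = (centuryAnchor + offset) % 7;
  // Memorable doomsday date in each month (index 0 = January), adjusted for leap years
  const anchorDates = [
    isLeap(year) ? 4 : 3, isLeap(year) ? 29 : 28,
    14, 4, 9, 6, 11, 8, 5, 10, 7, 12
  ];
  // Count forward/backward from the month's doomsday to the target date
  const diff = day - anchorDates[month - 1];
  return DAYS[(((doomsday + diff) % 7) + 7) % 7];
}

console.log(dayOfWeek(1812, 9, 7)); // the worked example above: Monday
```

The final double modulo keeps negative differences (dates that fall before the month's doomsday) in the 0–6 range.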
msobkyy
1,209,076
In One Minute : Swagger
Swagger is a suite of tools for API developers from SmartBear Software and a former specification...
20,049
2022-10-02T11:18:27
https://dev.to/rakeshkr2/in-one-minute-swagger-h59
api, beginners, programming, oneminute
Swagger is a suite of tools for API developers from SmartBear Software and a former specification upon which the OpenAPI Specification is based. The Swagger API project was created in 2011 by Tony Tam, technical co-founder of the dictionary site Wordnik.

{% embed https://youtu.be/9dovf71KShY %}

Swagger's open-source tooling can be broken up into different use cases: development, interaction with APIs, and documentation.

When creating APIs, Swagger tooling may be used to automatically generate an OpenAPI document based on the code itself. This embeds the API description in the source code of a project and is informally called code-first or bottom-up API development.

Using the Swagger Codegen project, end users generate client SDKs directly from the OpenAPI document, reducing the need for human-generated client code. As of August 2017, the Swagger Codegen project supported over 50 different languages and formats for client SDK generation.

When described by an OpenAPI document, Swagger open-source tooling may be used to interact directly with the API through the Swagger UI.

Official website: https://swagger.io/
OpenAPI: https://openapis.org/
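For context, here is a minimal, hypothetical OpenAPI 3.0 document of the kind that Swagger UI can render and Swagger Codegen can generate clients from (the API title and path are invented for illustration):

```json
{
  "openapi": "3.0.0",
  "info": { "title": "Example Petstore", "version": "1.0.0" },
  "paths": {
    "/pets": {
      "get": {
        "summary": "List all pets",
        "responses": {
          "200": { "description": "A list of pets" }
        }
      }
    }
  }
}
```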
rakeshkr2
1,209,178
AWS CLI vs Azure CLI
Hello, everyone! I want to compare AWS CLI and Azure CLI. I want to make this comparison simple. So...
20,010
2022-10-02T13:41:47
https://dev.to/berviantoleo/aws-cli-vs-azure-cli-158c
aws, azure, cli, opensource
Hello, everyone! In this post I compare the AWS CLI and the Azure CLI, and I want to keep the comparison simple. So here we are:

| Feature | AWS CLI | Azure CLI |
|:---:|:---:|:---:|
| Platform | Supports all common OSes (Linux, macOS, and Windows) + Docker. [Details](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) | Supports all common OSes + Docker. [Details](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) |
| Update CLI | Uses the same process as installing | Has a dedicated update command, `az upgrade`, and supports automatic updates. [Details](https://learn.microsoft.com/en-us/cli/azure/update-azure-cli) |
| Source Code/Repository (star counts may change over time) | Open source. 12.9k stars ![stars aws](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uzelr97rwlvzommcktiv.png) [Details](https://github.com/aws/aws-cli) | Open source. 3.3k stars. ![stars az](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f70t63i9538ve93gnj7a.png) [Details](https://github.com/Azure/azure-cli) |
| Programming Language | Python ![aws cli lang](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hgzkrcy8dtkgk3u81ul5.png) | Python ![az cli lang](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k4g5to9v7oemximer9yu.png) |
| Contributors (counts may change over time) | 309 ![aws contributors](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n1xeflegzemucdd3ug08.png) | 864 ![az contributors](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0idjgyr3p1xdn7gvqqai.png) |
| Configure CLI (Credentials) | - Manual `aws configure` <br> - Environment variables <br> - Shared credentials file <br> - Config file <br> - IAM Role <br> **Note**: Basically, you only need to provide a profile + the credentials (ACCESS_KEY_ID & SECRET_ACCESS_KEY) <br> [Details](https://github.com/aws/aws-cli#configuration) | - Interactive sign-in `az login` <br> - Provide username & password `az login -u <username> -p <password>` <br> - Using a service principal `az login --service-principal -u <app-id> -p <password-or-cert> --tenant <tenant>` <br> - Using managed identity `az login --identity` <br> [Details](https://learn.microsoft.com/en-us/cli/azure/authenticate-azure-cli) |
| License | [Apache 2.0](https://github.com/aws/aws-cli/blob/develop/LICENSE.txt) | [MIT](https://github.com/Azure/azure-cli/blob/dev/LICENSE) |

## Thank you

![Thanks](https://media.giphy.com/media/BYoRqTmcgzHcL9TCy1/giphy.gif)
berviantoleo
1,209,182
SFDX Commands in Action
The Salesforce CLI is a powerful command line interface that simplifies development and build...
0
2022-10-02T14:33:47
https://dev.to/discoversalesforce/sfdx-commands-in-action-1a1p
salesforce, sfdx, commands, salesforcecli
The Salesforce CLI is a powerful command line interface that simplifies development and build automation when working with your Salesforce org. Over the past few years I have been using many sfdx commands, from deploying metadata to running code snippets.

_**Here is a consolidated list of the commands a developer needs throughout the development life cycle.**_

---

## 1. Retrieve code/metadata from org

_How to retrieve source from an org?_

sfdx commands are very flexible; different types of metadata can be retrieved together.

```
# Example 1: Retrieve a single file
sfdx force:source:retrieve -p force-app\main\default\triggers\AccountTrigger.trigger

# Example 2: Retrieve multiple files
sfdx force:source:retrieve -p force-app\main\default\triggers\AccountTrigger.trigger,force-app\main\default\classes\TriggerHandler.cls
```

---

## 2. Deploy code/metadata to org

_How to deploy source to an org?_

sfdx commands are very flexible; different types of metadata can be deployed together.

```
# Example 1: Deploy a single file
sfdx force:source:deploy -p force-app\main\default\triggers\AccountTrigger.trigger

# Example 2: Deploy multiple files
sfdx force:source:deploy -p force-app\main\default\triggers\AccountTrigger.trigger,force-app\main\default\classes\TriggerHandler.cls
```

---

## 3. Run Test class

_Can we run a test class or test method using sfdx commands?_

Yes, here are a few useful options:

```
# Run complete test classes
sfdx force:apex:test:run -n "TestClass1,TestClass2"

# Another way
sfdx force:apex:test:run -t "TestClass1,TestClass2"

# Run only specific test methods
sfdx force:apex:test:run -t "TestClass1.method1,TestClass2.method2"
```

---

## 4. Execute code snippet

_Do I really need to open the Developer Console to execute code snippets?_

No, there is an sfdx command for executing code snippets directly from the terminal. 😀

```
sfdx force:apex:execute
```

---

## 5. Check logs using sfdx command

_How can I see the log stream in the VS Code terminal?_

Here is the sfdx command:

```
sfdx force:apex:log:tail
```

---

## 6. Login to org directly from cli

_Did you forget the password of a sandbox? But you don't want to reset it, since this sandbox is already set up and authorized by a couple of other 3rd-party applications like VS Code, XL-Connector, Data Loader, Metazoa, etc._

If you are as lazy as I am about updating the password in all of these applications, here is the sfdx command to rescue us:

```
sfdx force:org:open -u orgAlias
```

---

Happy Learning !!!
discoversalesforce
1,209,295
An Introduction to SQL
Introduction to SQL So, what exactly is SQL? The name SQL is an acronym for Structured...
0
2022-10-02T23:07:29
https://dev.to/zachmarullo/an-introduction-to-sql-3dk6
sql, beginners
## Introduction to SQL

So, what exactly is SQL? The name SQL is an acronym for Structured Query Language, though it is sometimes pronounced "sequel". SQL is a database querying language used to extract, insert, delete, and create records inside relational database management systems (RDBMS). Although SQL supports many more operations, we will stick with the basics, as this is meant to be an introduction to the language.

## What is a database?

According to [oracle.com](https://www.oracle.com/za/database/what-is-database/), a database is an organized collection of structured information, or data, typically stored electronically in a computer system. A database is usually controlled by a database management system (DBMS).

Data is most typically organized in rows and columns, which are then put into separate tables in order to make querying them effective. Over the years, databases have evolved substantially from a simple tree-like model that only allowed a one-to-many relationship. Some of the newer, more flexible types of database are relational databases, object-oriented databases, and, most recently, cloud databases and self-driving databases.

![Example of a Database Structure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/05ienurx2d697t8j7exe.png)

## What is SQL and why is it used?

Back in the days before SQL and the internet, databases used to be kept as actual files in filing cabinets. I'm sure it's not difficult to imagine how tedious it would be to go through a plethora of files in filing cabinets and manually compile and transform the data you wanted. This is part of the job of SQL. As mentioned above, SQL is used to query databases of potentially millions of pieces of data and return only the relevant information the searcher has specified. Databases can not only contain millions of data entries, but can also have multiple different fields for each entry.
This is where SQL comes in handy, by returning only the values specified by the person querying the database. For example, if you wanted to query a database of soccer players, but wanted the data returned to be filtered in some way (such as by height, name, or weight), SQL could return just that information.

## A brief history of SQL

According to Chad Brooks of businessnewsdaily.com,

> The SQL programming language was developed in the 1970s by IBM researchers Raymond Boyce and Donald Chamberlin. The programming language, known then as SEQUEL, was created following Edgar Frank Codd’s paper, “A Relational Model of Data for Large Shared Data Banks,” in 1970.

Since its development, according to W3 Schools, SQL became a standard of the American National Standards Institute (ANSI) in 1986 and of the International Organization for Standardization (ISO) in 1987. A major revision, SQL-92, was published in 1992, and the standard has been revised several times since (most recently SQL:2016, as of this writing).

## Basic SQL syntax

Depending on the SQL database you're using, you may or may not need to use the semicolon character. This is because commands in SQL can sometimes span several lines, and some programs require the semicolon to signify the end of a command. As far as case sensitivity goes, SQL as a language is not case sensitive, but uppercase is most frequently used for keywords.

There are different versions or dialects of the SQL language, but in order to be compliant with ANSI standards, they all support the major commands. These commands include:

- SELECT: retrieves information from one or more database tables
- UPDATE: allows you to update information within a database
- DELETE: allows you to delete information from a database
- INSERT: allows you to insert information into a database
- WHERE: used to filter data and return records that meet the chosen condition

Example of "SELECT", "FROM", and "WHERE" taken from w3schools.com:

`SELECT column1, column2, ...
FROM table_name WHERE condition;`

On the "SELECT" line of the code example above, the columns to be queried are chosen. "FROM" is used to choose which particular table you are trying to access. The "WHERE" condition can be set to return records with a particular ID, or to apply other constraints, as mentioned in the intro of this post.

## Popular SQL Databases in 2022

There are many SQL databases, but three of the most popular, according to [DB-Engines](https://db-engines.com/en/ranking), are:

- Oracle: Oracle is the most widely used SQL database currently; however, it is not seen as beginner friendly because the user may need a large amount of SQL knowledge to use the database effectively.
- MySQL: MySQL is a mostly open source, free-to-use database (a commercial license is required for business use). It is also considered beginner friendly, as anyone can download and begin to use MySQL in a short period of time. One of the drawbacks of using MySQL is that it lacks an efficient debugging tool relative to paid databases.
- Microsoft SQL Server: Microsoft SQL Server has a good source of documentation online and boasts that it has been the most secure database engine over the last 10 years.

## What jobs use SQL?

As you would expect, software developers top this list, because SQL is so useful for pulling data from databases to build graphs or keep statistics on web pages and in apps. However, SQL is used widely across many different jobs, including business analysts, data scientists, financial consultants, and many other professions.

## Conclusion

All in all, SQL is an extremely versatile querying language and can be used to do a multitude of different things, from sports statistics to customer databases to finance. If this blog post has grabbed your interest, I have included a couple of videos below that I found to be good introductions to the SQL language.
_Helpful Videos_ [Learn Web Code: MySQL for beginners](https://youtu.be/Yw3NNvqk-2o) [Basic Intro/Explanation to SQL](https://www.youtube.com/watch?v=27axs9dO7AE) **Sources** [Business News Daily](https://www.businessnewsdaily.com/5804-what-is-sql.html) [Wikipedia](https://en.wikipedia.org/wiki/SQL) [NIST Gov](https://www.itl.nist.gov/div897/ctg/dm/sql_info.html) [W3 Schools](https://www.w3schools.com/sql/sql_intro.asp) [Top Databases 2022](https://towardsdatascience.com/top-databases-to-use-in-2022-what-is-the-right-database-for-your-use-case-bb8d3f183b21#:~:text=2.-,MySQL,need%20to%20purchase%20a%20license.) [DB-Engines Current Rankings](https://db-engines.com/en/ranking) [Microsoft SQL Server Info](https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2022#Description)
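To tie the syntax section above back to the soccer-player example, here is a small, hypothetical query (the `players` table and its columns are invented for illustration):

```sql
-- Return the name and height of every player taller than 180 cm
SELECT name, height
FROM players
WHERE height > 180;
```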
zachmarullo
1,209,436
Playwright with Cucumber/JUnit 5 - Dependency Injection with PicoContainer
Dependency Injection This project uses dependency injection to put the instance of...
20,016
2022-10-03T00:25:35
https://dev.to/terencepan/playwright-with-cucumberjunit-5-dependency-injection-154l
cucumber, java, testing, playwright
### Dependency Injection

This project uses dependency injection to put the instance of TestContext into the step definition class. In a larger test project you would have multiple step definition classes, and you want an easy way to share common instances of things like the Playwright instance and BrowserContext. This also allows you to share data between steps in the same scenario, such as when we get and set the alert text.

This is implemented by including the Maven dependency for PicoContainer (added earlier in this series) and injecting the TestContext class through the constructor of the step classes that need it.

Constructor code in DemoSteps.java:

```java
public DemoSteps(TestContext testContext) {
    this.testContext = testContext;
    this.browser = testContext.getBrowser();
}
```

Code using dependency injection to store alert text between steps in the same scenario:

```java
@When("User clicks submit")
public void userClicksSubmit() {
    DemoPage demoPage = new DemoPage(page);
    String alertText = demoPage.clickSubmit();
    testContext.setAlertText(alertText);
}

@Then("Verify alert {string}")
public void verifyAlertToFillInResultIsShown(String alertText) {
    Assertions.assertEquals(alertText, testContext.getAlertText());
}
```

Full **TestContext.java** class:

```java
package io.tpan.steps;

import com.microsoft.playwright.*;
import io.cucumber.java.AfterAll;
import io.cucumber.java.BeforeAll;

public class TestContext {
    protected static Playwright playwright;
    protected static Browser browser;
    protected BrowserContext browserContext;
    protected Page page;

    @BeforeAll
    public static void beforeAll() {
        playwright = Playwright.create();
        browser = playwright.chromium().launch(new BrowserType.LaunchOptions() // or firefox, webkit
                .setHeadless(false)
                .setSlowMo(100));
    }

    @AfterAll
    public static void afterAll() {
        browser.close();
        playwright.close();
    }

    public Browser getBrowser() {
        return browser;
    }

    String alertText;

    public String getAlertText() {
        return alertText;
    }

    public void setAlertText(String alertText) {
        this.alertText = alertText;
    }
}
```

As always, the code is available on [GitHub](https://github.com/terencenmnpan/TestAutomation/tree/main/PlaywrightCucumberExample).
terencepan
1,209,534
What is Flow API in Kotlin?
In this blog, we are going to learn what is Flow API in Kotlin.
0
2022-10-03T03:44:40
https://amitshekhar.me/blog/flow-api-in-kotlin
kotlin, android
---
title: What is Flow API in Kotlin?
published: true
description: In this blog, we are going to learn what is Flow API in Kotlin.
tags: kotlin, android
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xmx1wpdgns4x246lkx1i.png
canonical_url: https://amitshekhar.me/blog/flow-api-in-kotlin
---

I am [**Amit Shekhar**](https://amitshekhar.me), a mentor helping developers land high-paying tech jobs.

In this blog, we are going to learn what the Flow API in Kotlin is. Kotlin provides many features out of the box that we can use to perform various tasks in our projects. When it comes to Android development, the Flow API in Kotlin is very useful.

This article is for anyone who is curious about the Flow API in Kotlin but has no idea what it is exactly. The goal is to make you understand what the Flow API in Kotlin is; if you read this article completely, I am sure that mission will be accomplished.

**This article was originally published at [amitshekhar.me](https://amitshekhar.me/blog/flow-api-in-kotlin).**

Let's begin.

**Flow is an asynchronous data stream (which generally comes from a task) that emits values to the collector and gets completed with or without an exception.**

This will make more sense when we go through an example. Let's take the standard example of image downloading.

**Assume that we have a task:** to download an image and emit items (values) representing the download progress, like 1%, 2%, 3%, and so on. The task can get completed with or without an exception: if everything goes well, it completes without an exception, but in case of a network failure, it completes with an exception.

So, there will be a task that gets done and emits some values, which will be collected by the collector.

Now, let's discuss the major components of Flow.
The major components of Flow are as below:

- Flow Builder
- Operator
- Collector

Let's understand this with the following analogy.

| | | |
| :----------- | :-: | :------------- |
| Flow Builder | -> | **Speaker** |
| Operator | -> | **Translator** |
| Collector | -> | **Listener** |
| | | |

### Flow Builder

In simple words, the flow builder helps in doing a task and emitting items. Sometimes we just need to emit items without doing any task, for example, emitting a few numbers (1, 2, 3); the flow builder helps us do that too. We can think of it as a **Speaker**: the Speaker thinks (does a task) and speaks (emits items).

### Operator

The operator helps in transforming the data from one format to another. We can think of the operator as a **Translator**. Assume that the Speaker speaks French and the Collector (Listener) understands only English; there has to be a translator to translate French into English. That translator is an Operator. Operators can actually do more than this: using an operator, we can also specify the thread on which the task will be done. We will see this later.

### Collector

The collector collects the items emitted using the flow builder, after they have been transformed by the operators. We can think of the collector as a **Listener**. Technically, the collector is also an operator, known as a terminal operator. For now, we will skip terminal operators, as they are not needed for this blog on the Flow API.
**Flow API Source Code**

The Flow interfaces look like the below in the source code of Coroutines:

```kotlin
public fun interface FlowCollector<in T> {
    public suspend fun emit(value: T)
}
```

```kotlin
public interface Flow<out T> {
    public suspend fun collect(collector: FlowCollector<T>)
}
```

## Hello World of Flow

```kotlin
flow {
    (0..10).forEach {
        emit(it)
    }
}.map {
    it * it
}.collect {
    Log.d(TAG, it.toString())
}
```

| | | |
| :----------- | :-: | :--------------- |
| `flow { }` | -> | **Flow Builder** |
| `map { }` | -> | **Operator** |
| `collect {}` | -> | **Collector** |
| | | |

Let's go through the code.

- First, we have a flow builder which emits 0 to 10.
- Then, we have a map operator which takes each value and squares it (it \* it). map is an intermediate operator.
- Then, we have a collector in which we get the emitted values and print them as 0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100.

**Note: Only when we actually connect the flow builder and the collector using the collect method does the flow start executing.**

Now it's time to learn more about flow builders.

## Types of flow builders

There are 4 types of flow builders:

1. `flowOf()`: It is used to create a flow from a given set of items.
2. `asFlow()`: It is an extension function that helps convert other types into flows.
3. `flow{}`: This is what we used in the Hello World example of Flow.
4. `channelFlow{}`: This builder creates a flow whose elements are emitted using the send function provided by the builder itself.

Examples:

`flowOf()`

```kotlin
flowOf(4, 2, 5, 1, 7)
    .collect {
        Log.d(TAG, it.toString())
    }
```

`asFlow()`

```kotlin
(1..5).asFlow()
    .collect {
        Log.d(TAG, it.toString())
    }
```

`flow{}`

```kotlin
flow {
    (0..10).forEach {
        emit(it)
    }
}
    .collect {
        Log.d(TAG, it.toString())
    }
```

`channelFlow{}`

```kotlin
channelFlow {
    (0..10).forEach {
        send(it)
    }
}
    .collect {
        Log.d(TAG, it.toString())
    }
```

At the end of this article, we will also learn to create a Flow using the flow builder.
Now we need to learn about the `flowOn` operator.

## `flowOn` Operator

The `flowOn` operator is very handy when it comes to controlling the thread on which the task will be done. Usually, in Android, we do a task on a background thread and show the result on the UI thread.

Let's see this with an example. We have added a delay of 500 milliseconds inside the flow builder to simulate a long-running task.

```kotlin
val flow = flow {
    // Run on Background Thread (Dispatchers.Default)
    (0..10).forEach {
        // emit items with 500 milliseconds delay
        delay(500)
        emit(it)
    }
}
.flowOn(Dispatchers.Default)
```

```kotlin
CoroutineScope(Dispatchers.Main).launch {
    flow.collect {
        // Run on Main Thread (Dispatchers.Main)
        Log.d(TAG, it.toString())
    }
}
```

Here the task inside the flow builder will be done on the background thread, which is `Dispatchers.Default`. Now, we need to switch to the UI thread to show the results. To achieve that, we wrap our collect call inside a launch with `Dispatchers.Main`. This is how the `flowOn` operator can be used to control the thread.

> `flowOn()` is like `subscribeOn()` in RxJava

**Dispatchers**: They help in deciding the thread on which the work will be done. There are three main types of dispatchers: **IO, Default, and Main**. The IO dispatcher is used for network and disk-related tasks, Default is used for CPU-intensive work, and Main is the UI thread of Android.

Now, we will learn how to create our own Flow for any task using the flow builder.

## Creating Flow Using Flow Builder

Let's learn it through examples.

**1. Move a file from one location to another**

Here, we will create our own Flow using the flow builder for moving a file from one location to another on a background thread, sending the completion status on the main thread.
```kotlin val moveFileflow = flow { // move file on background thread FileUtils.move(source, destination) emit("Done") } .flowOn(Dispatchers.Default) ``` ```kotlin CoroutineScope(Dispatchers.Main).launch { moveFileflow.collect { // when it is done } } ``` **2. Downloading an Image** Here, we will create our Flow using the Flow Builder for downloading an image; it will download the image on the background thread and keep sending the progress to the collector on the Main thread. ```kotlin val downloadImageflow = flow { // start downloading // send progress emit(10) // downloading... // ...... // send progress emit(75) // downloading... // ...... // send progress emit(100) } .flowOn(Dispatchers.Default) ``` ```kotlin CoroutineScope(Dispatchers.Main).launch { downloadImageflow.collect { // we will get the progress here } } ``` This is how we can create our own Flow. > In Kotlin, Coroutines are just the scheduler part of RxJava, but now with the Flow API coming alongside them, they can be an alternative to RxJava in Android So, now we have a good understanding of what exactly the Flow API in Kotlin is. Now, you can start using the Flow API in your Android project. That's it for now. Thanks [**Amit Shekhar**](https://amitshekhar.me) You can connect with me on: - [Twitter](https://twitter.com/amitiitbhu) - [LinkedIn](https://www.linkedin.com/in/amit-shekhar-iitbhu) - [GitHub](https://github.com/amitshekhariitbhu) - [Facebook](https://www.facebook.com/amit.shekhar.iitbhu) [**Read all of my high-quality blogs here.**](https://amitshekhar.me/blog)
amitiitbhu
1,209,666
Switching between multiple Terraform versions
Hey All, In case you are using Terraform to provision and manage your infrastructure, you normally...
0
2022-10-03T09:45:28
https://dev.to/eelayoubi/switching-between-multiple-terraform-versions-5e4f
terraform, devops, opensource, webdev
Hey All, if you are using Terraform to provision and manage your infrastructure, you normally install a specific version on your machine (or on your CI servers). But what if you want to install another Terraform version to test it out? Say you have multiple environments in the same codebase, dev and prod, and you deployed both of them using a fixed Terraform version (1.2.7). After a while a new Terraform version becomes available (1.3.1). You can update the version on your machine and test if it works fine, and if not, re-install the older version... which can be a bit cumbersome. Or, you can use [tfenv](https://github.com/tfutils/tfenv), which is a Terraform version manager. After installing tfenv, you can use the **tfenv list** command to list the Terraform versions you have installed: ``` ~ tfenv list 1.2.7 No default set. Set with 'tfenv use <version>' ``` There is a new Terraform version available that I would like to test out. To install it, you simply run: ``` tfenv install 1.3.1 Installing Terraform v1.3.1 Downloading release tarball from https://releases.hashicorp.com/terraform/1.3.1/terraform_1.3.1_darwin_amd64.zip ######################################################################### 100.0% Downloading SHA hash file from https://releases.hashicorp.com/terraform/1.3.1/terraform_1.3.1_SHA256SUMS Not instructed to use Local PGP (/usr/local/Cellar/tfenv/3.0.0/use-{gpgv,gnupg}) & No keybase install found, skipping OpenPGP signature verification Archive: /var/folders/jl/tfyd0yns2xxgb_58fj3ljslc0000gn/T/tfenv_download.XXXXXX.ouHdm2P4/terraform_1.3.1_darwin_amd64.zip inflating: /usr/local/Cellar/tfenv/3.0.0/versions/1.3.1/terraform Installation of terraform v1.3.1 successful. **To make this your default version, run 'tfenv use 1.3.1'** ``` List all the installed Terraform versions again: ``` tfenv list * 1.3.1 (set by /usr/local/Cellar/tfenv/3.0.0/version) 1.2.7 ``` You can see that the latest version is selected now. 
If you'd like to switch to the old version, you simply run: `tfenv use VERSION` So: `tfenv use 1.2.7` Now you can see the default selected version: ``` tfenv use 1.2.7 Switching default version to v1.2.7 Default version (when not overridden by .terraform-version or TFENV_TERRAFORM_VERSION) is now: 1.2.7 ➜ ~ tfenv list 1.3.1 * 1.2.7 (set by /usr/local/Cellar/tfenv/3.0.0/version) ``` This way you can easily test the newer version in your dev codebase by just switching versions. I hope you enjoyed this short blog. Thank you for reading!
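The console output above mentions a `.terraform-version` file: tfenv also lets you pin a version per project by placing that file at the project root, so the right version is selected automatically without running `tfenv use` by hand. A minimal sketch (the directory names here are just examples):

```shell
# Pin a Terraform version per project: tfenv reads .terraform-version
# from the current directory (or its parents) and uses that version.
mkdir -p my-dev-project my-prod-project
echo "1.3.1" > my-dev-project/.terraform-version
echo "1.2.7" > my-prod-project/.terraform-version

# With tfenv installed, running `terraform` inside each directory
# would now resolve to the pinned version automatically.
cat my-dev-project/.terraform-version
cat my-prod-project/.terraform-version
```

This also keeps the pinned version in source control, so teammates and CI pick it up for free.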
eelayoubi
1,210,000
How to create SBOMs for free
A recent Executive Order from the Biden Whitehouse instructs various government agencies to take...
0
2022-10-03T17:07:38
https://dev.to/codesec/how-to-create-sboms-for-free-with-codesec-by-contrast-232
opensource, javascript, security, java
{% embed https://youtu.be/AH-TLkaIeoY %} A recent [Executive Order](https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/) from the Biden White House instructs various government agencies to take action to improve our nation’s cybersecurity. One of those actions is to provide guidance and standards on Software Bills of Materials (SBOMs). In this article, we will explore what SBOMs are and how to easily create them with Contrast Security’s free developer toolset — CodeSec. An SBOM is a standardized format for recording all the constituent parts of a software product. It lists all the open-source libraries used, other third-party proprietary libraries and some metadata about the custom code in the product. The hope is that software purchasers, such as the Federal Government, will be able to use SBOMs in a searchable way for early detection and resolution of vulnerabilities hidden within the various parts of the products they use. Compiling and authoring an SBOM by hand can be a maintenance nightmare. No one in their right mind would want the chore of combing through all the libraries used in a project and recording their information in a very rigorous JavaScript Object Notation (JSON) format. Imagine making a mistake only a few hours into such a project as your mind starts to daydream about something more interesting. Luckily, CodeSec by Contrast provides a very simple command for creating SBOMs. After [installing CodeSec](https://bit.ly/3C6tvbB), navigate to the top level of your project in your terminal and run the following command: **Run Command:** _contrast audit --save_ ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3qhsvshaq2vx5j3rsbtm.png) _**SBOM file saved at the end of the contrast audit output**_ Near the end of the output of the audit command, CodeSec lists the name of the saved SBOM file. 
Viewing that file reveals it is a very extensive JSON record of the example project and the many libraries it uses. Some other highlights to note are that it lists the SBOM format used, the software vendor and the project name. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i154y6kida4bi7gg49wq.png) Once this file is created, it can be provided to customers or other security professionals related to your organization as needed. Because CodeSec is a command-line tool, it is also possible to build software automation around creating SBOMs. For example, it is possible to add the following line to your project’s [pre-commit Git Hook](https://www.contrastsecurity.com/security-influencers/how-to-scan-for-cybersecurity-risks-on-every-commit-with-codesec-git-hooks?hsLang=en-us) to create the SBOM and then add it to the commit, automatically and free for every commit: _**git add "$(contrast audit --save | grep -e '(SBOM)' | cut -d ' ' -f 10)"**_ This command, if placed in a pre-commit hook, would run CodeSec’s audit, tell it to create an SBOM, use grep to find the line in the audit output where the SBOM file name is, then cut that line into pieces at every space and grab the tenth piece — which is the full SBOM file name. After that, it would run Git’s “add” command to add the SBOM file to the commit in progress. An SBOM provides greater transparency into the components that a software product uses, and that knowledge can help decrease cybersecurity risks for the purchasers of that product. CodeSec provides a super simple mechanism for automatically creating SBOMs that then enables even more opportunities for automating the SBOM creation process. [Get started today!](https://bit.ly/3C6tvbB)
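If you want to sanity-check that grep-and-cut plumbing before wiring it into a hook, you can run it against a stand-in line. The sample text below is made up for illustration (the real audit output wording may differ); the only thing that matters is that the file name sits in the tenth space-separated field:

```shell
# Hypothetical stand-in for the audit output line containing "(SBOM)".
sample="A Software Bill of Materials (SBOM) was saved to sbom-report.json"

# Same extraction as the pre-commit hook: keep the (SBOM) line,
# split it on single spaces, and take the tenth field.
file=$(printf '%s\n' "$sample" | grep -e "(SBOM)" | cut -d " " -f 10)
echo "$file"   # sbom-report.json
```

Note that `cut -d " "` splits on every single space, so a change in the audit message wording would shift the field number; testing against a captured line like this catches that early.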
orlandov14
1,209,907
Querying installed packages with npm query
Check your installed dependencies using CSS selectors format.
0
2022-11-02T17:39:20
https://dev.to/cloudx/querying-installed-packages-with-npm-query-389f
javascript, webdev, node, react
--- title: Querying installed packages with npm query published: true description: Check your installed dependencies using CSS selectors format. tags: 'javascript, webdev, node, react' cover_image: 'https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/navarroaxel/assets/npm-query-cover.jpg' id: 1209907 --- Do you know if you have any package that makes use of `postinstall` scripts? Do you have a package installed twice? Since version 8.18.0 we can use [`npm query`](https://docs.npmjs.com/cli/v8/commands/npm-query) to find packages in the `node_modules` directory matching a specific query. This is helpful in identifying possible issues and fixing them. ## Why is a postinstall script important? The `postinstall` script allows a package to run code when it's installed, and this could be used in a malicious way. The npm security team receives [reports of malware](https://docs.npmjs.com/reporting-malware-in-an-npm-package) in the registry and works to keep the ecosystem safe, but did you check which dependencies run `postinstall` scripts? Do you trust them? ```bash npm query ":attr(scripts, [postinstall])" | jq 'map(.name)' ``` Review the list of dependencies, then uninstall the packages with a `postinstall` script using the following command: ```bash npm query ":attr(scripts, [postinstall])" | jq 'map(.name) | join("\n")' -r | xargs -I {} npm uninstall {} ``` 💡 The `jq` program is essential to filter data in JSON format. ## Finding duplicate packages Sometimes we find some errors on the DevTools' console when React or Angular is being loaded twice on our website. This happens when a dependency is incompatible with the [semver](https://semver.org) range of the same package declared in our `package.json`. E.g. we declared `react@^18.2.0` in our `package.json` and some dependency has `react@^16.8.0 || ^17.0.0` as a dependency in its own `package.json`. 
We can find every installed copy of a specific package using: ```bash npm query '#react' ``` Or, if we're looking for old versions of React: ```bash npm query '#react:semver(<18.0.0)' ``` 🧠 Remember that the selectors for `npm query` follow the [CSS selectors format](https://www.w3schools.com/cssref/css_selectors.asp). ## Check the dependency licenses Perhaps you can't use code with a specific license in your app. You can check the variety of licenses installed in your `node_modules` by printing a deduplicated list with this command: ```bash npm query '*' | jq '.[] | select(.license | type == "string") | .license' -r | sort | uniq ``` 🎁 Some packages, like `rimraf`, declare an object in the license field of the `package.json`, so you can use this example for these: ```bash npm query '*' | jq '.[] | select(.license | type == "object") | .license' # or using attr selector npm query ':attr(license, [url])' | jq 'map(.license)' ``` Then you can look up packages with specific licenses, or without a license. ```bash npm query '[license=MIT], [license=ISC], [license=]' ``` ## Other usages You can create the query that matches your needs, but we can review a few additional scenarios. ### Dependencies from Git You can install dependencies from Git branches or tags instead of the _npm registry_ and then you can check which of them are installed in your project. ```bash npm query ':type(git)' ``` Who needs these packages? The `npm why` command can tell us: ```bash npm query ":type(git)" | jq 'map(.name) | join("\n")' -r | xargs -I {} npm why {} ``` ### The peerDependencies If you run your app and you get an error for a missing dependency, check if a peer dependency of your direct dependencies is required. Maybe you should add it to your `package.json` file. 
```bash npm query ':root > * > .peer' | jq 'map(.name)' ``` 📚 You could have example pages for [`npm-query`](https://github.com/tldr-pages/tldr/blob/main/pages/common/npm-query.md) and [`jq`](https://github.com/tldr-pages/tldr/blob/main/pages/common/jq.md) in your terminal for quick usage thanks to the [tldr-pages](https://github.com/tldr-pages/tldr) open source project. ## Conclusion Using `npm query` in conjunction with `jq` you can check some constraints in your dependencies to make sure you don't have duplicate dependencies, or licenses that are incompatible with your app or company's policy. I hope this helps you to resolve your issues with npm packages. Tell me in the comments which queries you found useful.
navarroaxel
1,209,942
Free Online Courses for Today - October 3, 2022
Free Online Courses for Today .. Traffic Driving Mastery 2023 |Sell Anything to Anyone...
0
2022-10-03T14:49:28
https://dev.to/theprogramminbuddyclub/free-online-courses-for-today-october-3-2022-4nmm
marketing, tutorial, beginners
Free Online Courses for Today .. Traffic Driving Mastery 2023 |Sell Anything to Anyone Online . https://theprogrammingbuddy.club/course/hz-traffic-driving-mini-course/ . Content Marketing Mastery 2023 | Share Values to Earn More . https://theprogrammingbuddy.club/course/content-marketing-mini-course/ . Facebook Mastery 2023 | Levearge the Largest Online Traffic . https://theprogrammingbuddy.club/course/henry-zhang-facebook-marketing-mini-course/ . Leadership Strategies Mastery 2023 | How to Lead and Succeed . https://theprogrammingbuddy.club/course/henry-zhang-leadership-strategies-mini-course/ . Introduction to Reverse Osmosis Desalination . https://theprogrammingbuddy.club/course/introduction-to-reverse-osmosis-desalination/ . Email Monetization Mastery 2023 | Turn Contacts Into Incomes . https://theprogrammingbuddy.club/course/henry-zhang-email-monetization-mini-course/ . Learn AND Develop Winning Prototypes . https://theprogrammingbuddy.club/course/learn-and-develop-winning-prototypes/ . Certified Professional in PHP Language - Practice Test . https://theprogrammingbuddy.club/course/certified-professional-in-php-language-practice-test/ . Certified Professional in C# Programming - Practice Test . https://theprogrammingbuddy.club/course/certified-professional-in-csharp-programming-practice-test/ . Introduction to Drinking Water Treatment . https://theprogrammingbuddy.club/course/introduction-to-drinking-water-treatment/ . Becoming A Sales Professional . https://theprogrammingbuddy.club/course/building-sales-relationships-networking/ . Best of Website Traffic 2022: SEO, Facebook Ads & Google Ads . https://theprogrammingbuddy.club/course/website-traffic-2022/ . Design Profitable 3D Leon NFT for METAVERSE and NFT Markets . https://theprogrammingbuddy.club/course/design-profitable-3d-nft/ . 10x Your Social Skills & Connect With People . 
https://theprogrammingbuddy.club/course/master-your-social-skills/ . How To MAKE Comics - From concept, to pages, to publishing . https://theprogrammingbuddy.club/course/how-to-make-comics/ . poultry farming Broiler farming crash course . https://theprogrammingbuddy.club/course/poultry-farming-broiler-farming-crash-course/ . poultry farming viral diseases threaten poultry industry . https://theprogrammingbuddy.club/course/poultry-farming-viral-diseases-threaten-poultry-industry/ . Passive Income Mastery 2023 | Let Money Work for You Instead . https://theprogrammingbuddy.club/course/hz-passive-income-mini-course/ . poultry farming Bacterial diseases hindering satisfying prod . https://theprogrammingbuddy.club/course/poultry-farming-bacterial-diseases-hindering-satisfying-prod/ . master layer farm management the business of millionaires . https://theprogrammingbuddy.club/course/master-layer-farm-management-the-business-of-millionaires/ . Facebook Ads Targeting Strategies For Success Fast 2022 . https://theprogrammingbuddy.club/course/facebook-ads-marketing-targeting-strategy-2021-2020/ . Facebook Pixel Tracking Shopify ~ Apple iOS14 ~ Ecommerce . https://theprogrammingbuddy.club/course/facebook-ads-pixel-shopify-apple-ios14-ecommerce-wordpress/ . Facebook Ads Google My Business & Google Ads (Adwords) 2022 . https://theprogrammingbuddy.club/course/digital-marketing-with-google-my-business-seo-website-local-listing/ . Configuración y Optimizacion de tu Página de Facebook 2022 . https://theprogrammingbuddy.club/course/configuracion-optimizacion-pagina-facebook-marketing-digital-2021/ . Digital Marketing Business Online For Free Social Media 2022 . https://theprogrammingbuddy.club/course/social-media-marketing-digital-marketing-masterclass-for-beginners/ . Run Facebook Ads For Customer Engagement & Followers ~ BASIC . https://theprogrammingbuddy.club/course/grow-fan-page-facebook-marketing-page-likes-ad-offers-messages-pixel/ . 
Build Shopify store & Run Facebook Page Likes Ad In 2022 . https://theprogrammingbuddy.club/course/build-shopify-ecommerce-website-30-min-zero-experience-2020-2021-2022/ . Como crear y configurar tu canal de Youtube desde cero 2022 . https://theprogrammingbuddy.club/course/como-crear-y-configurar-tu-canal-de-youtube-marketing-video-movil/ . Marketing en Facebook Ads - Leads /Clientes Potenciales 2022 . https://theprogrammingbuddy.club/course/marketing-facebook-ads-leads-clientes-ventas-2020-2021-social-media/ . Sell Products with Facebook Ads Fast On Shopify 2022 . https://theprogrammingbuddy.club/course/shopify-dropshipping-facebook-ads-ecommerce-masterclass-2020-2021-2022/ . Marketing en Facebook Ads -Ecommerce para Ventas Online 2022 . https://theprogrammingbuddy.club/course/marketing-digital-facebook-ads-ecommerce-ventas-online-dropshipping/ . Facebook Ads And Marketing - Lead Generation Pro - 2022 . https://theprogrammingbuddy.club/course/facebook-marketing-for-lead-generation-2020/ . Facebook Ads & Facebook Marketing Funnel Crash Course- 2022 . https://theprogrammingbuddy.club/course/facebook-marketing-social-media-marketing-advertising-strategy-ads/ . Estrategias Pro de Targeting de Audiencia con Facebook Ads . https://theprogrammingbuddy.club/course/estrategias-pro-targeting-audiencia-facebook-ads-digital-social-media/ . Facebook Marketing & Facebook Ads Course For Beginners . https://theprogrammingbuddy.club/course/mastery-on-facebook-marketing-facebook-advertising-ads-digital/ . C++ Programming for Beginners . https://theprogrammingbuddy.club/course/c-programming-for-everyone/ . Run Digital Marketing Ad Using Google Adwords Express 2022 . https://theprogrammingbuddy.club/course/digital-marketing-google-adwords-express-seo-ppc-advertising-2018/ . Digital Marketing Business With Google My Business - 2022 . https://theprogrammingbuddy.club/course/online-marketing-with-google-my-business-digital-marketing/ . 
Run Search Ad In Google Ads & Easy SEO For Beginners-2022 . https://theprogrammingbuddy.club/course/digital-marketing-google-ads-adwords-search-seo-ppc/ . Accredited Professional Angel Oracle Card Reading Course . https://theprogrammingbuddy.club/course/angel-oracle-card-reading-diploma-certificate-professional-accredited/ . Certified Professional in Python Programming - Practice Test . https://theprogrammingbuddy.club/course/certified-professional-in-python-programming-practice-test/ . Sales Skills Training: Give a Winning Sales Presentation . https://theprogrammingbuddy.club/course/how-to-give-a-sales-presentation/ . Journalism: Conduct Great Media Interviews . https://theprogrammingbuddy.club/course/how-to-conduct-interviews/ . 2022 Become A Certified React Developer: Practice Tests . https://theprogrammingbuddy.club/course/certified-react-redux-javascript-developer-practice-tests/ . Sales Skills Training: Free Sales Generation Seminars . https://theprogrammingbuddy.club/course/how-to-deliver-a-free-sales-generation-seminar/ . Master the Art Of Self-Confidence – Unshakable Confidence . https://theprogrammingbuddy.club/course/power-of-confidence-full-course2022/ . Kick Out Depression -Complete Treatment 2022 . https://theprogrammingbuddy.club/course/kick-out-depression-complete-curetreatment-2022/ . 2022 Become A Certified Node JS Developer: Practice Tests . https://theprogrammingbuddy.club/course/become-a-certified-node-js-javascript-developer/ . Persuasion: Give a Persuasive Presentation . https://theprogrammingbuddy.club/course/how-to-give-a-persuasive-speech/ . .. Happy Learning The Programming Buddy Club!
theprogramminbuddyclub
1,209,963
Ruby & Active Record Associations
In this guide you will learn: How to declare associations between models using Active Record How...
0
2022-10-03T16:52:08
https://dev.to/meganmoulos/ruby-active-record-associations-1ij7
ruby, rails, tutorial
In this guide you will learn: - How to declare associations between models using Active Record - How to understand the different types of associations available with Active Record - How to use the methods automatically added to your models after creating these associations Using associations with Active Record is very powerful and an important part of using Ruby with databases. An **association** is a connection between two Active Record **models**. These associations provide built-in methods to make your databases easier to work with. This walkthrough assumes that you already understand how to create migrations and models. There are 6 types of associations: - `belongs_to` - `has_one` - `has_many` - `has_many :through` - `has_one :through` - `has_and_belongs_to_many` To figure out which type of association fits your needs, it is helpful to create an **Entity Relationship Diagram (ERD)**. There are helpful tools online to create your own ERD quickly and easily, but you can also use pen and paper! - [Lucidchart](https://www.lucidchart.com/pages/examples/er-diagram-tool) - [Smartdraw](https://www.smartdraw.com/entity-relationship-diagram/er-diagram-tool.htm) - [dbdiagram.io](https://dbdiagram.io/home) For this walkthrough, we will use dbdiagram.io. --- ##belongs_to, has_many ![belongs to association](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q05zy10epg8pfp04edww.png) The `belongs_to` association sets up a connection between two models, where the instance of one model "belongs to" the second model. In this example we have one author who has written many books. In this case, each book `belongs_to` an author. This association is made through the **foreign key**. > Note: From [Flatiron School docs](https://learn.co/lessons/activerecord-associations-review) - "Foreign keys are columns that refer to the primary key of another table. Conventionally, foreign keys in Active Record are comprised of the name of the model you're referencing, and _id. 
So for example if the foreign key was for a posts table it would be post_id." Read more about foreign keys at [The Odin Project](https://www.theodinproject.com/lessons/ruby-on-rails-active-record-associations). Here is the corresponding code: ```ruby class Book < ActiveRecord::Base belongs_to :author end ``` Note that the `belongs_to` association must use the singular term ("author"). From the [official docs](https://guides.rubyonrails.org/association_basics.html): "This is because Rails automatically infers the class name from the association name. If the association name is wrongly pluralized, then the inferred class will be wrongly pluralized too." The other side of the coin for this particular example, the author's `has_many` relationship, would look like this: ```ruby class Author < ActiveRecord::Base has_many :books end ``` --- ##belongs_to, has_one If we changed our example above so that each author _only_ wrote a single book, we could use the `has_one` association: ```ruby class Author < ActiveRecord::Base has_one :book end ``` Notice that `book` is singular. This may seem intuitive, but it is very important to note when to use singular or plural cases when writing your associations. The Book model would remain the same in this case, reading `belongs_to :author` in the singular. --- ##has_many, through: This association is often used to set up a many-to-many connection with another model. The declaring model can be matched with instances of another model _through_ a third, connecting model. For example, imagine a hospital with doctors that see many patients _through_ the patients' appointments. The diagram would look like this: ![has many through](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sp5p3rdhp16dvndiltr0.png) Each doctor has many patients _through_ the appointments table. The patient also has many doctors _through_ the appointments table. 
Here is the corresponding association code: ```ruby class Doctor < ActiveRecord::Base has_many :appointments has_many :patients, through: :appointments end class Appointment < ActiveRecord::Base belongs_to :doctor belongs_to :patient end class Patient < ActiveRecord::Base has_many :appointments has_many :doctors, through: :appointments end ``` Then new join models are automatically created for the newly associated objects. --- ##has_one, through: The `has_one, through:` association is similar to the `has_many, through:` association because they both create join models automatically. The difference is in the syntax, and that it sets up a one-to-one connection rather than one-to-many. In our example above, imagine a patient only had one doctor, through the patient's appointments: ```ruby class Doctor < ActiveRecord::Base has_many :appointments has_many :patients, through: :appointments end class Appointment < ActiveRecord::Base belongs_to :doctor belongs_to :patient end class Patient < ActiveRecord::Base has_many :appointments has_one :doctor, through: :appointments end ``` --- ##has_and_belongs_to_many This association is rarely used, and there is a blog post titled "[Why You Don’t Need Has_and_belongs_to_many Relationships](https://flatironschool.com/blog/why-you-dont-need-has-and-belongs-to-many/)" explaining why. From the Rails docs: > The simplest rule of thumb is that you should set up a has_many :through relationship if you need to work with the relationship model as an independent entity. If you don't need to do anything with the relationship model, it may be simpler to set up a has_and_belongs_to_many relationship (though you'll need to remember to create the joining table in the database). --- ##Bonus: Polymorphic Association Polymorphic association allows us to connect a model to multiple other models on a single association. 
Look at this table provided by the [Rails Active Record documentation](https://guides.rubyonrails.org/association_basics.html): ![polymorphic](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lgkvm35odj0d9ll4hif4.png) A polymorphic `belongs_to` declaration sets up an interface that any other model can use. For example, from an instance of an Employee model, a collection of pictures can be retrieved using @employee.pictures. You could also retrieve @product.pictures by the same logic. Find more information, as well as helpful tips and tricks, in the [official documentation here](https://guides.rubyonrails.org/association_basics.html).
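Under the hood, all of these macros resolve records through foreign keys, as described in the `belongs_to` section above. As a rough, framework-free illustration (plain Ruby, with arrays standing in for database tables; the `AUTHORS` and `BOOKS` sample data and helper names are made up and are not Active Record API):

```ruby
# Plain-Ruby sketch of what `has_many :books` / `belongs_to :author`
# boil down to: lookups through the author_id foreign key.
AUTHORS = [{ id: 1, name: "Ursula K. Le Guin" }]
BOOKS   = [
  { id: 1, title: "A Wizard of Earthsea", author_id: 1 },
  { id: 2, title: "The Dispossessed",     author_id: 1 }
]

# Roughly what `author.books` does: SELECT * FROM books WHERE author_id = ?
def books_for(author)
  BOOKS.select { |book| book[:author_id] == author[:id] }
end

# Roughly what `book.author` does: SELECT * FROM authors WHERE id = ?
def author_of(book)
  AUTHORS.find { |author| author[:id] == book[:author_id] }
end

puts books_for(AUTHORS.first).map { |b| b[:title] }.join(", ")
puts author_of(BOOKS.first)[:name]
```

Active Record generates these methods (and the SQL behind them) for you, which is why getting the singular/plural names and the `_id` foreign-key convention right matters so much.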
meganmoulos
1,209,964
Keeping your development resources organized with Notion
As a developer, I have a lot of learning resources and I quickly realized that keeping stuff in my...
0
2022-10-03T15:39:48
https://bereghici.dev/blog/keeping-your-development-resources-organized-with-notion
notion, productivity, programming, webdev
As a developer, I have a lot of learning resources and I quickly realized that keeping stuff in my browser's bookmarks doesn't work for me. I needed a tool to keep everything in one place. I started to use [Notion](https://www.notion.so/) and it didn't disappoint me. In this article I want to share how I structured all my resources using this tool. ## Structure This is what my Notion "homepage" looks like. I divided the articles and tutorials into different sections. ![Main notion page](https://res.cloudinary.com/bereghici-dev/image/upload/w_1024,q_auto,f_auto,b_rgb:e6e9ee/bereghici-dev/blog/main_notion_page_chipkg) Each section is divided into subsections. This allows me to keep the resources related to one topic together and it helps me find things faster. ![Frontend notion page](https://res.cloudinary.com/bereghici-dev/image/upload/w_1024,q_auto,f_auto,b_rgb:e6e9ee/bereghici-dev/blog/frontend_notion_page_kmd17o) ## Bookmarks Each subsection has a database where I store the bookmarks related to that specific topic. I'm using the [Save to Notion](https://chrome.google.com/webstore/detail/save-to-notion/ldmmifpegigmeammaeckplhnjbbpccmm?hl=en) Chrome extension to bookmark my links. The benefits of keeping the bookmarks in Notion are that you can add tags or notes, and you can filter or sort them by different criteria. ![Bookmarks notion page](https://res.cloudinary.com/bereghici-dev/image/upload/w_1024,q_auto,f_auto,b_rgb:e6e9ee/bereghici-dev/blog/accessibility_bookmarks_sxjrmo) ## Task List The importance of practice in programming cannot be ignored. "Practice makes a man perfect", they say. Often, just reading a tutorial or a book is not enough; you have to get your hands dirty with the specific technology/pattern/language/whatever you learned. In the task list I define small side-projects or things I need to practice. This is a great way to assess my progress. 
![A Notion task list](https://res.cloudinary.com/bereghici-dev/image/upload/w_1024,q_auto,f_auto,b_rgb:e6e9ee/bereghici-dev/blog/task_list_notion_page_vknph3) ## Reading list The Reading list is my collection of books. I found that taking notes while reading a book helps me process the information better. Also, I like being able to go back and search my notes and quickly find the most essential information from books I've read. ![A Notion reading list](https://res.cloudinary.com/bereghici-dev/image/upload/w_1024,q_auto,f_auto,b_rgb:e6e9ee/bereghici-dev/blog/reading_list_notion_page_p2s7hx) ## Saved Tweets A majority of the most valuable information I consume online comes from tweets and threads. The goal is to keep everything in one place, and helpfully there is a bot named [Save To Notion](https://twitter.com/SaveToNotion) that can save tweets or threads directly to your Notion when you tag it on a specific tweet. ![A Notion page for saved tweets](https://res.cloudinary.com/bereghici-dev/image/upload/w_1024,q_auto,f_auto,b_rgb:e6e9ee/bereghici-dev/blog/saved_twitter_notion_page_enaszt) ## RSS Feed I bookmarked many useful blogs, but it was annoying to open each link manually to see if there was new content. A common way to follow new content is using RSS feeds. Unfortunately, Notion doesn't provide this functionality. I solved this problem by creating a small application in Rust that lets you manage the RSS sources in a separate Notion page, reads the new content from your sources daily, and saves it in a Notion feed. 
The project and the setup instructions can be found here: [https://github.com/abereghici/notion-feed.rs](https://github.com/abereghici/notion-feed.rs) This is what the RSS sources page looks like: ![A Notion page for managing RSS sources](https://res.cloudinary.com/bereghici-dev/image/upload/w_1024,q_auto,f_auto,b_rgb:e6e9ee/bereghici-dev/blog/rss_source_notion_page_wr5nmz) This is the RSS feed: ![A RSS Feed in notion page](https://res.cloudinary.com/bereghici-dev/image/upload/w_1024,q_auto,f_auto,b_rgb:e6e9ee/bereghici-dev/blog/rss_feed_notion_page_zgt3vc) ## Conclusion Notion is a great and flexible tool that can increase your productivity. It comes with a lot of templates that can cover all your needs. If you find other useful use cases, share them with us in the comments.
abereghici
1,209,990
Batch script that loops through each folder and copies the contents of those folders
I have a list of folders, each one with files of the same extension. I want to write a batch...
0
2022-10-03T16:31:56
https://dev.to/houcemouni/batch-script-that-scrolling-each-folder-and-copying-the-content-of-this-folders-1bn8
I have a list of folders, each one containing files with the same extension. I want to write a batch script that loops through every folder and copies all the files from each one. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/un2aqfsudrijbucntwau.png)
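A sketch of a Windows batch script for this. Assumptions: `C:\src` holds the folders, everything gets copied into `C:\dest`, and the shared extension is `.txt`; adjust these placeholders to your actual setup.

```batch
@echo off
rem Loop over every subfolder of SRC and copy its *.txt files into DEST.
set "SRC=C:\src"
set "DEST=C:\dest"
if not exist "%DEST%" mkdir "%DEST%"
for /D %%F in ("%SRC%\*") do (
    echo Copying files from "%%F"
    copy "%%F\*.txt" "%DEST%" >nul
)
```

Note that `for /D` iterates over directories only, and that `%%F` is the syntax for a `.bat` file; use single `%F` if you run the loop directly at the command prompt.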
houcemouni
1,210,148
XML DataSource SourceType="SQL"
Example &lt;tabpage Caption="Kaynak İzle"&gt; &lt;section Caption="Teklifler"...
0
2022-10-03T19:08:17
https://dev.to/fsatihin/datasource-sourcetypesql-4jkh
uyumsoft
> Örnek ``` <tabpage Caption="Kaynak İzle"> <section Caption="Teklifler" CaptionVisibility="True" Visibility="True"> <row> <cell colspan="1"> <control FieldName="Grid1" ControlType="GridEdit" Caption="Teklifler" HorizontalScrollBarMode="Hidden" ServerAttribute="KeyFieldName=OFFER_M_ID"> <DataSource SourceType="SQL" Source=" SELECT OFM.OFFER_M_ID, OFD.OFFER_D_ID, TO_CHAR(OFM.DOC_DATE,'DD.MM.YYYY') AS TEKLIF_TARIH, OFM.DOC_NO AS TEKLIF_NO, ITE.DCARD_CODE AS STOK_KOD, ITE.DCARD_NAME AS STOK_AD, OFD.QTY AS MIKTAR, OFD.AMT_WITH_DISC_TRA / OFD.QTY AS BIRIM_FIYAT FROM PSMT_OFFER_M OFM LEFT JOIN PSMT_OFFER_D OFD ON OFM.OFFER_M_ID = OFD.OFFER_M_ID LEFT JOIN INVW_ITEM_TABLES ITE ON OFD.DCARD_ID = ITE.DCARD_ID WHERE 1=1 AND OFD.OFFER_M_ID IN (SELECT ORD.SOURCE_M_ID FROM PSMT_ORDER_D ORD WHERE ORD.ORDER_M_ID = {Id})"/> <GridColumn FieldName="TEKLIF_TARIH" ControlType="DateEdit" Caption="Teklif Tarih" Width="45"> </GridColumn> <GridColumn FieldName="TEKLIF_NO" ControlType="TextEdit" Caption="Teklif No" Width="45"> </GridColumn> <GridColumn FieldName="STOK_KOD" ControlType="TextEdit" Caption="Stok Kod" Width="75"> </GridColumn> <GridColumn FieldName="STOK_AD" ControlType="TextEdit" Caption="Stok Ad" Width="250"> </GridColumn> <GridColumn FieldName="MIKTAR" ControlType="SpinEdit" Caption="Miktar" Width="45"> </GridColumn> <GridColumn FieldName="BIRIM_FIYAT" ControlType="SpinEdit" Caption="Birim Fiyat" Width="45"> </GridColumn> </control> </cell> </row> </section> <section Caption="Talepler" CaptionVisibility="True" Visibility="True"> <row> <cell colspan="1"> <control FieldName="Grid2" ControlType="GridEdit" Caption="Talepler" HorizontalScrollBarMode="Hidden" ServerAttribute="KeyFieldName=REQUEST_M_ID"> <MasterGrid MasterProperty="Grid1" MasterKey="OFFER_M_ID" DetailKey="OFFER_M_ID" /> <DataSource SourceType="SQL" Source=" SELECT concat('GeneralCard.aspx?CommandName=RequestMCollection.Analyze&amp;ObjectId=',RQM.REQUEST_M_ID) AS URL, TO_CHAR(RQM.DOC_DATE,'DD.MM.YYYY') AS 
TALEP_TARIH, RQM.DOC_NO AS TALEP_NO, REG.REGISTER_FULL_NAME AS PERSONEL, ITE.DCARD_CODE AS STOK_KOD, ITE.DCARD_NAME AS STOK_AD, RQD.QTY AS MIKTAR, RQD.NOTE1 AS ACIKLAMA1, RQD.NOTE2 AS ACIKLAMA2, RQD.NOTE3 AS ACIKLAMA3, RQD.NOTE_LARGE AS NOT1 FROM PSMT_REQUEST_M RQM LEFT JOIN PSMT_REQUEST_D RQD ON RQM.REQUEST_M_ID = RQD.REQUEST_M_ID LEFT JOIN INVW_ITEM_TABLES ITE ON RQD.DCARD_ID = ITE.DCARD_ID LEFT JOIN HRMD_REGISTER REG ON RQD.REGISTER_ID = REG.REGISTER_ID WHERE 1=1 AND RQD.REQUEST_D_ID IN (SELECT OFD.SOURCE_D_ID FROM PSMT_OFFER_D OFD WHERE OFD.OFFER_M_ID ={Grid1.OFFER_M_ID} AND OFD.OFFER_D_ID={Grid1.OFFER_D_ID})"/> <GridColumn FieldName="TALEP_TARIH" ControlType="TextEdit" Caption="Talep Tarih" Width="50"></GridColumn> <GridColumn FieldName="URL" ControlType="LinkEdit" TextField="TALEP_NO" Caption="Talep No" Width="50" NavigateUrlFormatString="{0}" ServerAttribute="PropertiesHyperLinkEdit-Target=_blank"></GridColumn> <GridColumn FieldName="PERSONEL" ControlType="TextEdit" Caption="Personel" Width="50"></GridColumn> <GridColumn FieldName="STOK_KOD" ControlType="TextEdit" Caption="Stok Kod" Width="75"></GridColumn> <GridColumn FieldName="STOK_AD" ControlType="TextEdit" Caption="Stok Ad" Width="250"></GridColumn> <GridColumn FieldName="MIKTAR" ControlType="SpinEdit" Caption="Miktar" Width="50"></GridColumn> <GridColumn FieldName="ACIKLAMA1" ControlType="TextEdit" Caption="Açıklama1" Width="50"></GridColumn> <GridColumn FieldName="ACIKLAMA2" ControlType="TextEdit" Caption="Açıklama2" Width="50"></GridColumn> <GridColumn FieldName="ACIKLAMA3" ControlType="TextEdit" Caption="Açıklama3" Width="50"></GridColumn> <GridColumn FieldName="NOT1" ControlType="TextEdit" Caption="Not" Width="50"></GridColumn> <GridColumn FieldName="NOT1" ControlType="TextEdit" Caption="Not" Width="50"></GridColumn> </control> <control FieldName="btnDbSave" ControlType="Button" Caption="Ara" Width="100"> <ClientSideEvents Click="function (s,e) {GetControl('TownList').PerformCallback('Refresh')} "> 
</ClientSideEvents> </control> </cell> </row> </section> </tabpage> ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w0hhyc8i3t64belyveg2.png) > tabpage, sql results, SourceType SQL ``` <tabpage Caption="Ambalaj İzle"> <section Caption="Ambalaj" CaptionVisibility="True" Visibility="True"> <row> <cell colspan="1"> <control FieldName="Grid1" ControlType="GridEdit" Caption="Ambalaj" HorizontalScrollBarMode="Hidden" ServerAttribute="KeyFieldName=ITEM_M_ID"> <DataSource SourceType="SQL" Source=" SELECT concat('GeneralCard.aspx?CommandName=PackageMCollection.Analyze&amp;ObjectId=',IPM.PACKAGE_ID) AS URL, IPM.PACKAGE_NO AS PALET_NO, WHO.WHOUSE_CODE AS DEPO_KOD, ITE.ITEM_CODE AS STOK_KOD, ITE.ITEM_NAME AS STOK_AD, LOT.LOT_CODE AS PARTI, A01.ITEM_ATTRIBUTE_CODE AS OZELLIK1, WHL.LOCATION_CODE AS RAF, ROUND(IPM.QTY,2) AS PALET_ICI_MIKTAR, ROUND(IPM.QTY * ITE.NET_WEIGHT,2) AS AGIRLIK_KG FROM INVD_PACKAGE_M IPM LEFT JOIN INVD_ITEM ITE ON IPM.ITEM_ID = ITE.ITEM_ID LEFT JOIN INVD_WHOUSE WHO ON IPM.WHOUSE_ID = WHO.WHOUSE_ID LEFT JOIN INVD_LOT LOT ON IPM.LOT_ID = LOT.LOT_ID LEFT JOIN INVD_ITEM_ATTRIBUTE A01 ON IPM.ITEM_ATTRIBUTE1_ID = A01.ITEM_ATTRIBUTE_ID LEFT JOIN INVD_BWH_LOCATION WHL ON IPM.BWH_LOCATION_ID = WHL.BWH_LOCATION_ID WHERE 1=1 AND IPM.INPUT_OUTPUT = 1 AND IPM.WORDER_M_ID = {Id}"/> <GridColumn FieldName="URL" ControlType="LinkEdit" TextField="PALET_NO" Caption="Palet No" Width="50" NavigateUrlFormatString="{0}" ServerAttribute="PropertiesHyperLinkEdit-Target=_blank"></GridColumn> <GridColumn FieldName="DEPO_KOD" ControlType="TextEdit" Caption="Depo Kod" Width="45"> </GridColumn> <GridColumn FieldName="STOK_KOD" ControlType="TextEdit" Caption="Stok Kod" Width="75"> </GridColumn> <GridColumn FieldName="STOK_AD" ControlType="TextEdit" Caption="Stok Ad" Width="250"> </GridColumn> <GridColumn FieldName="PARTI" ControlType="TextEdit" Caption="Parti" Width="250"> </GridColumn> <GridColumn FieldName="OZELLIK1" ControlType="TextEdit" 
Caption="Özellik-1" Width="250"> </GridColumn> <GridColumn FieldName="RAF" ControlType="TextEdit" Caption="Raf" Width="250"> </GridColumn> <GridColumn FieldName="PALET_ICI_MIKTAR" ControlType="SpinEdit" Caption="Palet içi Miktar" Width="45"> </GridColumn> <GridColumn FieldName="AGIRLIK_KG" ControlType="SpinEdit" Caption="Ağırlık" Width="45"> </GridColumn> </control> </cell> </row> </section> </tabpage> ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/03czofr5nx6uddsbzr0q.png) > Master Detay ``` <tabpage Caption="Kaynak İzle"> <section Caption="Teklifler" CaptionVisibility="True" Visibility="True"> <row> <cell colspan="1"> <control FieldName="Grid1" ControlType="GridEdit" Caption="Teklifler" HorizontalScrollBarMode="Hidden" ServerAttribute="KeyFieldName=OFFER_M_ID"> <DataSource SourceType="SQL" Source=" SELECT OFM.OFFER_M_ID, OFD.OFFER_D_ID, TO_CHAR(OFM.DOC_DATE,'DD.MM.YYYY') AS TEKLIF_TARIH, OFM.DOC_NO AS TEKLIF_NO, ITE.DCARD_CODE AS STOK_KOD, ITE.DCARD_NAME AS STOK_AD, OFD.QTY AS MIKTAR, OFD.AMT_WITH_DISC_TRA / OFD.QTY AS BIRIM_FIYAT FROM PSMT_OFFER_M OFM LEFT JOIN PSMT_OFFER_D OFD ON OFM.OFFER_M_ID = OFD.OFFER_M_ID LEFT JOIN INVW_ITEM_TABLES ITE ON OFD.DCARD_ID = ITE.DCARD_ID WHERE 1=1 AND OFD.OFFER_M_ID IN (SELECT ORD.SOURCE_M_ID FROM PSMT_ORDER_D ORD WHERE ORD.ORDER_M_ID = {Id})"/> <GridColumn FieldName="TEKLIF_TARIH" ControlType="DateEdit" Caption="Teklif Tarih" Width="45"> </GridColumn> <GridColumn FieldName="TEKLIF_NO" ControlType="TextEdit" Caption="Teklif No" Width="45"> </GridColumn> <GridColumn FieldName="STOK_KOD" ControlType="TextEdit" Caption="Stok Kod" Width="75"> </GridColumn> <GridColumn FieldName="STOK_AD" ControlType="TextEdit" Caption="Stok Ad" Width="250"> </GridColumn> <GridColumn FieldName="MIKTAR" ControlType="SpinEdit" Caption="Miktar" Width="45"> </GridColumn> <GridColumn FieldName="BIRIM_FIYAT" ControlType="SpinEdit" Caption="Birim Fiyat" Width="45"> </GridColumn> </control> </cell> </row> </section> 
<section Caption="Talepler" CaptionVisibility="True" Visibility="True"> <row> <cell colspan="1"> <control FieldName="Grid2" ControlType="GridEdit" Caption="Talepler" HorizontalScrollBarMode="Hidden" ServerAttribute="KeyFieldName=REQUEST_M_ID"> <MasterGrid MasterProperty="Grid1" MasterKey="OFFER_M_ID" DetailKey="OFFER_M_ID" /> <DataSource SourceType="SQL" Source=" SELECT concat('GeneralCard.aspx?CommandName=RequestMCollection.Analyze&amp;ObjectId=',RQM.REQUEST_M_ID) AS URL, TO_CHAR(RQM.DOC_DATE,'DD.MM.YYYY') AS TALEP_TARIH, RQM.DOC_NO AS TALEP_NO, REG.REGISTER_FULL_NAME AS PERSONEL, ITE.DCARD_CODE AS STOK_KOD, ITE.DCARD_NAME AS STOK_AD, RQD.QTY AS MIKTAR, RQD.NOTE1 AS ACIKLAMA1, RQD.NOTE2 AS ACIKLAMA2, RQD.NOTE3 AS ACIKLAMA3, RQD.NOTE_LARGE AS NOT1 FROM PSMT_REQUEST_M RQM LEFT JOIN PSMT_REQUEST_D RQD ON RQM.REQUEST_M_ID = RQD.REQUEST_M_ID LEFT JOIN INVW_ITEM_TABLES ITE ON RQD.DCARD_ID = ITE.DCARD_ID LEFT JOIN HRMD_REGISTER REG ON RQD.REGISTER_ID = REG.REGISTER_ID WHERE 1=1 AND RQD.REQUEST_D_ID IN (SELECT OFD.SOURCE_D_ID FROM PSMT_OFFER_D OFD WHERE OFD.OFFER_M_ID ={Grid1.OFFER_M_ID} AND OFD.OFFER_D_ID={Grid1.OFFER_D_ID})"/> <GridColumn FieldName="TALEP_TARIH" ControlType="TextEdit" Caption="Talep Tarih" Width="50"></GridColumn> <GridColumn FieldName="URL" ControlType="LinkEdit" TextField="TALEP_NO" Caption="Talep No" Width="50" NavigateUrlFormatString="{0}" ServerAttribute="PropertiesHyperLinkEdit-Target=_blank"></GridColumn> <GridColumn FieldName="PERSONEL" ControlType="TextEdit" Caption="Personel" Width="50"></GridColumn> <GridColumn FieldName="STOK_KOD" ControlType="TextEdit" Caption="Stok Kod" Width="75"></GridColumn> <GridColumn FieldName="STOK_AD" ControlType="TextEdit" Caption="Stok Ad" Width="250"></GridColumn> <GridColumn FieldName="MIKTAR" ControlType="SpinEdit" Caption="Miktar" Width="50"></GridColumn> <GridColumn FieldName="ACIKLAMA1" ControlType="TextEdit" Caption="Açıklama1" Width="50"></GridColumn> <GridColumn FieldName="ACIKLAMA2" 
ControlType="TextEdit" Caption="Açıklama2" Width="50"></GridColumn> <GridColumn FieldName="ACIKLAMA3" ControlType="TextEdit" Caption="Açıklama3" Width="50"></GridColumn> <GridColumn FieldName="NOT1" ControlType="TextEdit" Caption="Not" Width="50"></GridColumn> <GridColumn FieldName="NOT1" ControlType="TextEdit" Caption="Not" Width="50"></GridColumn> </control> <control FieldName="btnDbSave" ControlType="Button" Caption="Ara" Width="100"> <ClientSideEvents Click="function (s,e) {GetControl('TownList').PerformCallback('Refresh')} "> </ClientSideEvents> </control> </cell> </row> </section> </tabpage> ```
fsatihin
1,222,931
How to Create a New Astro JS App: Cheatsheet
How to create a new Astro JS app: cheatsheet reference guide to spinning up, launching and customising your new Astro project quickly.
0
2022-10-18T09:58:39
https://rodneylab.com/how-to-create-new-astro-js-app/
webdev, node, astro, javascript
--- title: "How to Create a New Astro JS App: Cheatsheet" published: "true" description: "How to create a new Astro JS app: cheatsheet reference guide to spinning up, launching and customising your new Astro project quickly." tags: "webdev, node, astro, javascript" canonical_url: "https://rodneylab.com/how-to-create-new-astro-js-app/" cover_image: "https://dev-to-uploads.s3.amazonaws.com/uploads/articles/69byhzuztcb214vl1vvl.png" --- ## 🚀 Spinning up a New Astro App We see how to create a new Astro JS app in this post. This will be equally handy if you are new to Astro and want to hit the ground running, or if you are a seasoned astronaut but can never remember the spin-up commands. We get a cheatsheet with the commands for a skeleton Astro project and also see how you can add a touch of CI tooling as a bonus. Really hope you find it useful, and please do reach out or drop a comment below if there is something missing. You can find contact details further down the page. ## 🧱 How to Create a New Astro JS App ### How to Create a New Astro JS App - To get going, run the Create Astro app command. ```shell pnpm create astro@latest my-new-astro-app && cd $_ && code . ``` Here our project gets created in a new `my-new-astro-app` directory. &ldquo;`&& cd $_`&rdquo; will put us in the new directory when everything is ready. &ldquo;`&& code .`&rdquo; will open up VSCode in the new directory (change this to &ldquo;`&& codium .`&rdquo; or &ldquo;`&& subl .`&rdquo; if you use Codium or Sublime Text). - Skip this step if you want to keep Astro anonymous data collection enabled (default). ```shell pnpm astro telemetry disable ``` - Next, you can easily configure your project from the command line. Astro lets you bring your own framework; you just have to configure it. Astro add does this automatically for you if you tell it what you want.
```shell pnpm astro add react svelte vue mdx sitemap tailwind ``` Naturally, you can pick and <strong>choose only the integrations you want</strong>! Get the latest list of <a aria-label="See a full list of available Astro integrations" href="https://astro.build/integrations/">available Integrations and links to docs</a>. - Out of the box, Astro is ready to ship a static site. This works for most content sites. You can deploy a static site to any popular hosting service. Alternatively, you can make your whole site Server Side Rendered (SSR), which lets you add additional edge functionality. Astro add will configure the right adapter for you if you decide to go SSR. ```shell # OPTIONAL: SSR only pnpm astro add cloudflare deno netlify node vercel ``` Again just **pick the adapter for your hosting service** and skip this if you prefer the default Static Site Generation (SSG) mode. Get the latest list of <a aria-label="Open a list of latest Astro S S R adapters" href="https://docs.astro.build/en/guides/server-side-rendering/">available adapters and links to&nbsp;docs</a>. - Spin up the dev server: ```shell pnpm dev ``` The CLI will give you a link so you can open the new app in your browser. The default is <a aria-label="Open new app in browser" href="http://localhost:3000/">`localhost:3000/`</a>, but the port number may be different if port `3000` is already in use. That&rsquo;s all there is to it! If you are new to Astro, check out the <a aria-label="Open the getting started with Astro guide" href="https://rodneylab.com/getting-started-astro/">Getting Started with Astro Guide for 10 tips to help you hit the ground running</a>. Also see the <a href="https://rodneylab.com/astro-js-tutorial/">Quick start Astro JS tutorial which even goes into publishing your static Astro site</a> on Netlify. ## 🙌🏽 How to Create a New Astro JS App: Wrapping Up In this post, we saw how to create a new Astro JS App.
In particular, we saw: - how to use **pnpm to create a new Astro project**, - how you can disable telemetry potentially to **enhance your privacy**, - some auto configuration of your Astro app **using Astro add**. Hope you have found this post useful! I am keen to hear what you are doing with Astro and ideas for future projects. Also let me know about any possible improvements to the content above. ## 🙏🏽 How to Create a New Astro JS App: Feedback Have you found the post useful? Would you prefer to see posts on another topic instead? Get in touch with ideas for new posts. Also if you like my writing style, get in touch if I can write some posts for your company site on a consultancy basis. Read on to find ways to get in touch, further below. If you want to support posts similar to this one and can spare a few dollars, euros or pounds, please <a aria-label="Support Rodney Lab via Buy me a Coffee" href="https://rodneylab.com/giving/">consider supporting me through Buy me a Coffee</a>. Finally, feel free to share the post on your social media accounts for all your followers who will find it useful. As well as leaving a comment below, you can get in touch via <a aria-label="Reach out on Twitter" href="https://twitter.com/messages/compose?recipient_id=1323579817258831875">@askRodney</a> on Twitter and also <a aria-label="Contact Rodney Lab via Telegram" href="https://t.me/askRodney">askRodney on Telegram</a>. Also, see <a aria-label="Get in touch with Rodney Lab" href="https://rodneylab.com/contact/">further ways to get in touch with Rodney Lab</a>. I post regularly on <a aria-label="See posts on Astro" href="https://rodneylab.com/tags/astro/">Astro</a> as well as <a aria-label="See posts on svelte kit" href="https://rodneylab.com/tags/sveltekit/">SvelteKit</a>. Also <a aria-label="Subscribe to the Rodney Lab newsletter" href="https://rodneylab.com/about/#newsletter">subscribe to the newsletter to keep up-to-date</a> with our latest projects.
askrodney
1,222,951
Writing "Writing An Interpreter In Go" In TypeScript
Date: 08/2020, Repository: wafuwafu13/Interpreter-made-in-TypeScript I wrote Writing An...
0
2022-10-25T03:54:45
https://dev.to/wafuwafu13/writing-writing-an-interpreter-in-go-in-typescript-part-i-65k
typescript, go
> Date: 08/2020, Repository: [wafuwafu13/Interpreter-made-in-TypeScript](https://github.com/wafuwafu13/Interpreter-made-in-TypeScript) I wrote [Writing An Interpreter In Go](https://interpreterbook.com/) in TypeScript up to chapter 2. This article will focus on the analysis of let statements. ### environment building I used TypeScript, webpack, Jest, ESLint, Prettier. [first commit](https://github.com/wafuwafu13/Interpreter-made-in-TypeScript/commit/313d7a08d3213a0f89c0c0d2920fea091ce7fe1f) ### Lexer It takes source code as input and returns a sequence of tokens representing that source code as output. Replacing Go [Struct](https://pkg.go.dev/go/types#Struct) with JavaScript [class](https://developer.mozilla.org/ja/docs/Web/JavaScript/Reference/Statements/class) worked. ```go type Lexer struct { input string position int readPosition int ch byte } func New(input string) *Lexer { l := &Lexer{input: input} l.readChar() return l } func (l *Lexer) readChar() { if l.readPosition >= len(l.input) { l.ch = 0 } else { l.ch = l.input[l.readPosition] } l.position = l.readPosition l.readPosition += 1 } ``` -> ```javascript export interface LexerProps { input: string; position: number; readPosition: number; ch: string | number; } export class Lexer<T extends LexerProps> { input: T['input']; position: T['position']; readPosition: T['readPosition']; ch: T['ch']; constructor( input: T['input'], position: T['position'] = 0, readPosition: T['readPosition'] = 0, ch: T['ch'] = '', ) { this.input = input; this.position = position; this.readPosition = readPosition; this.ch = ch; this.readChar(); } readChar(): void { if (this.readPosition >= this.input.length) { this.ch = 'EOF'; } else { this.ch = this.input[this.readPosition]; } this.position = this.readPosition; this.readPosition += 1; } ``` ### AST It is used as an internal representation of the source code. 
In Go, the type of the `Value` field is `Expression`, but in TypeScript, the type of the `Identifier` class was used as in the `Name` field. ```go type Node interface { TokenLiteral() string String() string } type Statement interface { Node statementNode() } type Expression interface { Node expressionNode() } type Program struct { Statements []Statement } func (p *Program) TokenLiteral() string { if len(p.Statements) > 0 { return p.Statements[0].TokenLiteral() } else { return "" } } func (p *Program) String() string { var out bytes.Buffer for _, s := range p.Statements { out.WriteString(s.String()) } return out.String() } type LetStatement struct { Token token.Token Name *Identifier Value Expression } func (ls *LetStatement) statementNode() {} func (ls *LetStatement) TokenLiteral() string { return ls.Token.Literal } func (ls *LetStatement) String() string { var out bytes.Buffer out.WriteString(ls.TokenLiteral() + " ") out.WriteString(ls.Name.String()) out.WriteString(" = ") if ls.Value != nil { out.WriteString(ls.Value.String()) } out.WriteString(";") return out.String() } type Identifier struct { Token token.Token Value string } func (i *Identifier) expressionNode() {} func (i *Identifier) TokenLiteral() string { return i.Token.Literal } func (i *Identifier) String() string { return i.Value } ``` -> ```javascript export interface ProgramProps { statements: | LetStatement<LetStatementProps>[] | ReturnStatement<ReturnStatementProps>[] | ExpressionStatement<ExpressionStatementProps>[]; } export class Program<T extends ProgramProps> { statements: T['statements']; constructor(statements: T['statements'] = []) { this.statements = statements; } } export interface LetStatementProps { token: Token<TokenProps>; name: Identifier<IdentifierProps>; value?: Identifier<IdentifierProps>; } export class LetStatement<T extends LetStatementProps> { token: T['token']; name?: T['name']; value?: T['value']; constructor(token: T['token']) { this.token = token; } tokenLiteral(): string | 
number { return this.token.literal; } string(): string { let statements = []; statements.push(this.tokenLiteral() + ' '); statements.push(this.name!.string()); statements.push(' = '); if (this.value != null) { statements.push(this.value.string()); } statements.push(';'); return statements.join(''); } } export interface IdentifierProps { token: Token<TokenProps>; value: string | number; } export class Identifier<T extends IdentifierProps> { token: T['token']; value: T['value']; constructor(token: T['token'], value: T['value']) { this.token = token; this.value = value; } tokenLiteral(): string | number { return this.token.literal; } string(): string | number { return this.value; } } ``` ### Parser It takes input data and builds an abstract syntax tree data structure. Since the Type Guard was cumbersome, I defined a new `DEFAULT` token. [commit](https://github.com/wafuwafu13/Interpreter-made-in-TypeScript/commit/42f923e1b2ea74d42393f6dad17fae868a72447b) ```go func (p *Parser) parseLetStatement() *ast.LetStatement { stmt := &ast.LetStatement{Token: p.curToken} if !p.expectPeek(token.IDENT) { return nil } stmt.Name = &ast.Identifier{Token: p.curToken, Value: p.curToken.Literal} if !p.expectPeek(token.ASSIGN) { return nil } p.nextToken() stmt.Value = p.parseExpression(LOWEST) if p.peekTokenIs(token.SEMICOLON) { p.nextToken() } return stmt } ``` -> ```ts parseLetStatement(): LetStatement<LetStatementProps> { const stmt: LetStatement<LetStatementProps> = new LetStatement( this.curToken, ); if (!this.expectPeek(TokenDef.IDENT)) { return stmt; } stmt.name = new Identifier(this.curToken, this.curToken.literal); if (!this.expectPeek(TokenDef.ASSIGN)) { return stmt; } while (!this.curTokenIs(TokenDef.SEMICOLON)) { this.nextToken(); } return stmt; } ``` ### Testing I used Jest. 
```go func TestLetStatements(t *testing.T) { tests := []struct { input string expectedIdentifier string expectedValue interface{} }{ {"let x = 5;", "x", 5}, {"let y = true;", "y", true}, {"let foobar = y;", "foobar", "y"}, } for _, tt := range tests { l := lexer.New(tt.input) p := New(l) program := p.ParseProgram() checkParserErrors(t, p) if len(program.Statements) != 1 { t.Fatalf("program.Statements does not contain 1 statements. got=%d", len(program.Statements)) } stmt := program.Statements[0] if !testLetStatement(t, stmt, tt.expectedIdentifier) { return } val := stmt.(*ast.LetStatement).Value if !testLiteralExpression(t, val, tt.expectedValue) { return } } } ``` -> ```ts describe('testLetStatement', () => { const tests = [ { input: 'let x = 5;', expectedIdentifier: 'x', expectedValue: 5 }, { input: 'let y = true;', expectedIdentifier: 'y', expectedValue: true }, { input: 'let foobar = y;', expectedIdentifier: 'foobar', expectedValue: 'y', }, ]; for (const test of tests) { const l = new Lexer(test['input']); const p = new Parser(l); const program: Program<ProgramProps> = p.parseProgram(); it('checkParserErrros', () => { const errors = p.Errors(); if (errors.length != 0) { for (let i = 0; i < errors.length; i++) { console.log('parser error: %s', errors[i]); } } expect(errors.length).toBe(0); }); it('parseProgram', () => { expect(program).not.toBe(null); expect(program.statements.length).toBe(1); }); const stmt: LetStatement<LetStatementProps> | any = program.statements[0]; it('letStatement', () => { expect(stmt.token.literal).toBe('let'); expect(stmt.name.value).toBe(test['expectedIdentifier']); expect(stmt.value.value).toBe(test['expectedValue']); }); } }); ``` ### Debugging It may just be that I am new to Go and debugging was not the right way to do it, but with TypeScript, I was able to debug the AST structure cleanly. 
![Debugging with Go](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8r4epcgb79eo45xi47kt.png) -> ![Debugging with TS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jakzda9u4n2f71l0elgv.png)
wafuwafu13
1,222,954
Complete Explanation With Example On Laravel Middleware
In this tutorial, we will learn about Laravel middleware. We will cover custom middleware and also...
0
2022-10-18T10:15:40
https://dev.to/hoangit/complete-explanation-with-example-on-laravel-middleware-26bi
laravel, middleware, tutorial
In this tutorial, we will learn about Laravel middleware. We will cover custom middleware as well as Laravel's predefined middleware. Middleware is a very important part of every framework, not only Laravel. But what is middleware, and how does it work in a framework like Laravel? ## What is middleware in Laravel? Middleware provides an easy mechanism to inspect and filter HTTP requests before getting the required information from the request. There are lots of predefined middleware in Laravel. All the middleware in Laravel is registered in the `app\Http\Kernel.php` file. If you visit this path, you will see that the following middleware are already registered. ```php <?php /* App\Http\Kernel.php */ namespace App\Http; use Illuminate\Foundation\Http\Kernel as HttpKernel; class Kernel extends HttpKernel { /** * The application's global HTTP middleware stack. * These middleware are run during every request to your application. */ protected $middleware = [ // \App\Http\Middleware\TrustHosts::class, \App\Http\Middleware\TrustProxies::class, \Fruitcake\Cors\HandleCors::class, \App\Http\Middleware\PreventRequestsDuringMaintenance::class, \Illuminate\Foundation\Http\Middleware\ValidatePostSize::class, \App\Http\Middleware\TrimStrings::class, \Illuminate\Foundation\Http\Middleware\ConvertEmptyStringsToNull::class, ]; /** * The application's route middleware groups. 
*/ protected $middlewareGroups = [ 'web' => [ \App\Http\Middleware\EncryptCookies::class, \Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse::class, \Illuminate\Session\Middleware\StartSession::class, // \Illuminate\Session\Middleware\AuthenticateSession::class, \Illuminate\View\Middleware\ShareErrorsFromSession::class, \App\Http\Middleware\VerifyCsrfToken::class, \Illuminate\Routing\Middleware\SubstituteBindings::class, ], 'api' => [ // \Laravel\Sanctum\Http\Middleware\EnsureFrontendRequestsAreStateful::class, 'throttle:api', \Illuminate\Routing\Middleware\SubstituteBindings::class, ], ]; /** * The application's route middleware. * * These middleware may be assigned to groups or used individually. * */ protected $routeMiddleware = [ 'auth' => \App\Http\Middleware\Authenticate::class, 'auth.basic' => \Illuminate\Auth\Middleware\AuthenticateWithBasicAuth::class, 'cache.headers' => \Illuminate\Http\Middleware\SetCacheHeaders::class, 'can' => \Illuminate\Auth\Middleware\Authorize::class, 'guest' => \App\Http\Middleware\RedirectIfAuthenticated::class, 'password.confirm' => \Illuminate\Auth\Middleware\RequirePassword::class, 'signed' => \Illuminate\Routing\Middleware\ValidateSignature::class, 'throttle' => \Illuminate\Routing\Middleware\ThrottleRequests::class, 'verified' => \Illuminate\Auth\Middleware\EnsureEmailIsVerified::class, 'prevent-back-history' => \App\Http\Middleware\PreventBackHistory::class, 'role' => \Spatie\Permission\Middlewares\RoleMiddleware::class, 'permission' => \Spatie\Permission\Middlewares\PermissionMiddleware::class, 'role_or_permission' => \Spatie\Permission\Middlewares\RoleOrPermissionMiddleware::class, ]; } ``` Look at that Kernel.php class, there are three middleware arrays like `$middleware`, `$middlewareGroups` and `$routeMiddleware`. - `$middleware`: These middleware are run during every request to your application. Actually global middleware. - `$middlewareGroups`: The application's route middleware groups. 
- `$routeMiddleware`: These middleware may be assigned to groups or used individually. We can use them for specific routes only. ## Why do we use middleware? Take an example: if the user is not logged in, the middleware will redirect the user to your application's login screen; if the user is logged in, the middleware will allow the request to proceed further into the application. How can we implement that? Check out your `$routeMiddleware` group: there is an `auth` middleware. We can use it to fulfill this requirement. ```php <?php /* App\Http\Kernel.php */ protected $routeMiddleware = [ 'auth' => \App\Http\Middleware\Authenticate::class, ]; ``` Now we use this auth middleware on our route like: ```php /* routes/web.php */ Route::get('/profile', function () { // })->middleware('auth'); ``` Now only a logged-in user can visit the `/profile` URI. For this type of situation, middleware is exactly the tool we need. ## Creating Custom Middleware in Laravel To create a new middleware, use the `make:middleware` Artisan command: ``` php artisan make:middleware EnsureTokenIsValid ``` Running this command will place a new `EnsureTokenIsValid` class within your `app/Http/Middleware` directory. ```php <?php /* app/Http/Middleware/EnsureTokenIsValid.php */ namespace App\Http\Middleware; use Closure; class EnsureTokenIsValid { /** * Handle an incoming request. * * @param \Illuminate\Http\Request $request * @param \Closure $next * @return mixed */ public function handle($request, Closure $next) { return $next($request); } } ``` Now let's make it usable in our application by updating it like below: ```php <?php /* app/Http/Middleware/EnsureTokenIsValid.php */ namespace App\Http\Middleware; use Closure; class EnsureTokenIsValid { /** * Handle an incoming request. 
* * @param \Illuminate\Http\Request $request * @param \Closure $next * @return mixed */ public function handle($request, Closure $next) { if ($request->input('token') !== 'my-secret-token') { return redirect('home'); } return $next($request); } } ``` Now we can use this middleware in our application to filter user requests like: ```php <?php /* routes/web.php */ use App\Http\Middleware\EnsureTokenIsValid; Route::get('/profile', function () { // })->middleware(EnsureTokenIsValid::class); ``` Now you can visit this `/profile` URL only if you provide a matching token; otherwise, you cannot visit this page. ## Registering Middleware If we would like to assign middleware to specific routes by name, we should first register the middleware in the `$routeMiddleware` array with a key name like: ```php <?php /* App\Http\Kernel.php */ protected $routeMiddleware = [ 'check-token' => \App\Http\Middleware\EnsureTokenIsValid::class, ]; ``` Now we can handle the previous route with middleware in the following way: ```php <?php /* routes/web.php */ Route::get('/profile', function () { // })->middleware('check-token'); ``` ## Assign Multiple Middleware to Route We can also assign multiple middleware to a route by passing an array of middleware names to the `middleware` method: ```php /* routes/web.php */ Route::get('/profile', function () { // })->middleware(['auth', 'check-token']); ``` ## Excluding Middleware From Route When we assign middleware to a group of routes, we may also need to prevent the middleware from being applied to an individual route within the group. 
We may accomplish this using the `withoutMiddleware` method:

```php
<?php
/* routes/web.php */

use App\Http\Middleware\EnsureTokenIsValid;

Route::middleware([EnsureTokenIsValid::class])->group(function () {
    Route::get('/', function () {
        //
    });

    Route::get('/profile', function () {
        //
    })->withoutMiddleware([EnsureTokenIsValid::class]);
});

// or, using the alias:
Route::middleware(['check-token'])->group(function () {
    Route::get('/', function () {
        //
    });

    Route::get('/profile', function () {
        //
    })->withoutMiddleware(['check-token']);
});
```

We can also exclude a given set of middleware from an entire `group` of route definitions:

```php
<?php
/* routes/web.php */

use App\Http\Middleware\EnsureTokenIsValid;

Route::withoutMiddleware([EnsureTokenIsValid::class])->group(function () {
    Route::get('/profile', function () {
        //
    });
});
```

## Middleware Parameters

Laravel middleware can also receive additional parameters. They are passed to the middleware after the `$next` argument:

```php
<?php
/* app\Http\Middleware\EnsureUserHasRole.php */

namespace App\Http\Middleware;

use Closure;

class EnsureUserHasRole
{
    /**
     * Handle the incoming request.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  \Closure  $next
     * @param  string  $role
     * @return mixed
     */
    public function handle($request, Closure $next, $role)
    {
        if (! $request->user()->hasRole($role)) {
            // Redirect...
        }

        return $next($request);
    }
}
```

Middleware parameters are specified when defining the route by separating the middleware name and parameters with a `:`; multiple parameters should be delimited by commas:

```php
/* routes/web.php */
Route::put('/post/{id}', function ($id) {
    //
})->middleware('role:editor');
```

## Terminable Middleware in Laravel

Suppose we need to do something after the HTTP response has been sent to the browser. In this case, we can use terminable middleware. To do so, add a `terminate` method to your middleware:

```php
<?php
/* app\Http\Middleware\TerminatingMiddleware.php */

namespace App\Http\Middleware;

use Closure;

class TerminatingMiddleware
{
    /**
     * Handle an incoming request.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  \Closure  $next
     * @return mixed
     */
    public function handle($request, Closure $next)
    {
        return $next($request);
    }

    /**
     * Handle tasks after the response has been sent to the browser.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  \Illuminate\Http\Response  $response
     * @return void
     */
    public function terminate($request, $response)
    {
        // Handle tasks after the response has been sent to the browser.
    }
}
```

Notice that the `terminate` method receives both the request and the response. Once we have defined a terminable middleware, we should add it to the list of route or global middleware in the `app/Http/Kernel.php` file.

Source: [laravelia](https://www.laravelia.com/post/complete-explanation-with-example-on-laravel-middleware)
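As an addendum, registering a terminable middleware globally might look like the sketch below. This is an assumption about where you want it to run; placing it in `$routeMiddleware` instead would scope it to individual routes.

```php
<?php
/* App\Http\Kernel.php — sketch: registering the example TerminatingMiddleware
   globally so its terminate() method runs after every response is sent. */
protected $middleware = [
    // ... existing global middleware ...
    \App\Http\Middleware\TerminatingMiddleware::class,
];
```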
hoangit

---

# Angular 10 | MEAN, Google auth, JWT, Lazyload, file uploads, Guards, Pipes, Admin area, dashboard and much more.

Published: 2022-10-19
Canonical URL: https://dev.to/dennysjmarquez/angular-10-mean-google-auth-jwt-lazyload-upload-de-archivos-guards-pipes-zona-admin-dashboard-y-mucho-mas-gi4
Tags: angular, javascript, mean, spanish
2020-2022. This is a **challenge** from a course by [**Fernando Herrera**](https://www.udemy.com/course/angular-avanzado-fernando-herrera/)**.**

# Hospital system: managing doctors, hospitals and users

Let me tell you, **I really enjoyed this course** ☕ yes sir, **it took almost 2 years** while I pushed code and learned **within my day-to-day work and responsibilities**; check out all the **repositories.**

## **I had fun. I took this course to refresh my knowledge and pick up new things.**

My LinkedIn profile: [**💡Dennys Jose Marquez Reyes 🧠 | LinkedIn**](https://www.linkedin.com/in/dennysjmarquez/) **👍**

![](https://miro.medium.com/max/700/1*4EVYZDTFZ6fsbhHXlZDCMA.png)

**Demo:** [https://adminpro-system-hospitals.onrender.com/](https://adminpro-system-hospitals.onrender.com/)

**Source code**

**Client:** [https://github.com/dennysjmarquez/angular-adv-adminpro](https://github.com/dennysjmarquez/angular-adv-adminpro)

**Server:** [https://github.com/dennysjmarquez/angular-adv-adminpro-backend](https://github.com/dennysjmarquez/angular-adv-adminpro-backend)

## Alright, let's go through everything I built, used and learned in this wonderful course:

**MEAN Stack** _Mongo, Express, Angular, Node.js._

---

# Session 1 — Front-End

---

**Google Sign-In protected by token, from the Front-End all the way to the Backend**

Using third-party libraries in Angular projects: **gapi** Google Sign-In, **jQuery**, etc.

Routes with configuration. Version control and releases.

Handling modules, services, lazy loading. Child routes (**forChild()**), **@Input**, **@Output** and **@ViewChild** (references to elements in the HTML).

Implementation of **charts** with **ng2-charts.**

**Reactive Forms**, form validation, use of **SweetAlert**, storing information in **LocalStorage**.

RxJS Observables, **pipes:** **retry**, **take**, **filter**, **map**.

## Using **interval**, Observable, Observer.
```js
returnObservable(): Observable<number> {
  let i = 0;
  const ob$ = new Observable((observer: Observer<number>) => {
    const interval = setInterval(() => {
      observer.next(i);

      if (i === 4) {
        clearInterval(interval);
        observer.complete();
      }

      if (i === 2) {
        i = 0;
        observer.error('i llego al valor 2');
      }

      ++i;
    }, 1000);
  });

  return ob$;
}
```

---

```js
this._intervalSubs = this.returnInterval()
  .pipe(
    // Specifies how many times the Observable will emit
    take(10),
    // Filters the values; in this case only even numbers pass through
    filter((value) => value % 2 === 0),
    // This operator receives the value and transforms it
    map((value) => {
      return 'Hola mundo ' + (value + 1);
    })
  )
  .subscribe(
    (valor) => console.log('[returnInterval] valor', valor),
    (error) => console.warn('[returnInterval] Error', error),
    () => console.log('[returnInterval] Terminado')
  );
}

returnInterval() {
  return interval(100);
}
```

---

## An Angular **pipe** to display an image from a URL or from the server

```js
import { Pipe, PipeTransform } from '@angular/core';
import { environment } from '@env';

const baseUrl = environment.baseUrl;

@Pipe({
  name: 'getImage',
})
export class GetImagePipe implements PipeTransform {
  transform(value: any, type: 'users' | 'medicos' | 'hospitals'): any {
    return value && value.includes('://')
      ? value
      : `${baseUrl}/upload/${type}/${value || 'no-imagen'}`;
  }
}
```

## Lazy loading implementation with route guards and component loading

```js
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { RouterModule, Routes } from '@angular/router';
import { PagesComponent } from './pages.component';

// Guards
import { AuthGuard } from '../guards/auth.guard';

const APP_ROUTES: Routes = [
  // Main template
  {
    path: 'dashboard',
    component: PagesComponent,
    canLoad: [AuthGuard],
    canActivate: [AuthGuard],
    loadChildren: () =>
      import('./pages-child-router.module').then(module => module.PagesChildRouterModule),
    // Child routes are lazy-loaded
    // children: [],
  },
];

const APP_ROUTING = RouterModule.forChild(APP_ROUTES);

@NgModule({
  declarations: [],
  imports: [CommonModule, APP_ROUTING],
  exports: [RouterModule],
})
export class PagesRouter {}
```

## Everything organized into modules, best practices 🤜🏻🤛🏻

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uuin5xp73mm8zbwl6ixw.png)

I used **ngZone.run()** to tell Angular to refresh the view, because something happened outside **Angular's life cycle** and Angular does not detect it as one of its own processes: an **external library** made a change outside Angular's change detection.
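Going back to the GetImagePipe shown a little earlier, its branching logic can be exercised as a plain function, outside Angular. This is only a sketch: `baseUrl` here is a hypothetical stand-in for `environment.baseUrl`.

```js
// Framework-free sketch of the GetImagePipe transform logic.
// baseUrl is a hypothetical stand-in for environment.baseUrl.
const baseUrl = 'https://api.example.com';

function getImage(value, type) {
  // Absolute URLs (e.g. Google profile pictures) pass through untouched;
  // anything else is resolved against the backend upload endpoint,
  // falling back to the 'no-imagen' placeholder.
  return value && value.includes('://')
    ? value
    : `${baseUrl}/upload/${type}/${value || 'no-imagen'}`;
}

console.log(getImage('https://lh3.googleusercontent.com/a/photo.jpg', 'users'));
// → https://lh3.googleusercontent.com/a/photo.jpg (unchanged)
console.log(getImage('abc123.png', 'medicos'));
// → https://api.example.com/upload/medicos/abc123.png
console.log(getImage(null, 'hospitals'));
// → https://api.example.com/upload/hospitals/no-imagen
```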
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pdi0txqcd0hnqe185vt1.png)

**More information about ngZone here** 👇

[https://dennysjmarquez.dev/magazine/ngzone-como-runoutsideangular-podria-reducir-las-llamadas-de-deteccion-de-cambios-H41jASdhwzMReJe2JUZk/](https://dennysjmarquez.dev/magazine/ngzone-como-runoutsideangular-podria-reducir-las-llamadas-de-deteccion-de-cambios-H41jASdhwzMReJe2JUZk/)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yrw68rtx3tovqmc8oy9h.png)

## A complete system to identify the user, both with Google auth and with a regular account, on the back end

google-auth.service.ts

```js
import { Injectable } from '@angular/core';
import Swal, { SweetAlertIcon } from 'sweetalert2';
import { AuthService } from './auth.service';

declare var gapi: any;

@Injectable({
  providedIn: 'root'
})
export class GoogleAuthService {
  constructor(private _authService: AuthService) {}

  makertGoogleLoginBtn(options: {
    // Id of the Google button in the HTML
    btnSignin: string,
    // Parameters for the error message if something fails while starting the app for login
    errors?: { title?: string, text?: string, icon?: SweetAlertIcon, confirmButtonText?: string },
    // Function called after a successful sign-in
    callbackStartApp: Function
  }) {
    // Renders the Google button
    gapi.signin2.render(options.btnSignin, {
      'scope': 'profile email',
      'width': 240,
      'height': 50,
      'longtitle': false,
      'onsuccess': (googleUser) => {},
      'onfailure': console.log
    });

    // Starts the Google login
    this._authService.google.startApp('goole-signin').then((profile: any) => {
      options.callbackStartApp(profile);
    }).catch(error => {
      Swal.fire({
        title: options?.errors?.title || 'Error!',
        text: options?.errors?.text || error?.error?.msg || 'Error desconocido',
        icon: options?.errors?.icon || 'error',
        confirmButtonText: options?.errors?.confirmButtonText || 'Ok'
      });
    });
  }
}
```

auth.service.ts

```js
import { Injectable, NgZone } from '@angular/core';
import { LoginGoogleData } from '../interfaces/login-google-data.interface';
import { tap } from 'rxjs/operators';
import { Observable, throwError } from 'rxjs';
import { UserModel } from '../models/user.model';
import { HttpClient } from '@angular/common/http';
import { Router } from '@angular/router';
import { environment } from '@env';
import { LoginForm } from '../interfaces/login-form.interface';

declare var gapi: any;
declare var $: any;

@Injectable({
  providedIn: 'root',
})
export class AuthService {
  baseURL = environment.baseUrl;
  google_id = environment.GOOGLE_ID;
  public currentUser: UserModel;

  constructor(private http: HttpClient, private _router: Router, private _ngZone: NgZone) {}

  google = {
    /**
     * Gets a Google session
     */
    initGoogleAuth: () => {
      return new Promise((resolve) => {
        gapi.load('auth2', () => {
          this.google.startApp['gapiAuth2'] = gapi.auth2;

          // Retrieve the singleton for the GoogleAuth library and set up the client.
          const auth2Init = gapi.auth2.init({
            client_id: this.google_id,
            cookiepolicy: 'single_host_origin',
            // Request scopes in addition to 'profile' and 'email'
            //scope: 'additional_scope'
          });

          resolve(auth2Init);
        });
      });
    },

    /**
     * Gets a Google session and attaches the click listener to the Google button
     *
     * @param btnSignin {string} Id of the Google button in the HTML
     */
    startApp: (btnSignin: string) =>
      new Promise(async (resolve, reject) => {
        // Gets a Google session
        const auth2Init: any = await this.google.initGoogleAuth();
        const element = document.getElementById(btnSignin);

        // Captures the click event on the Google button
        auth2Init.attachClickHandler(
          element,
          {},
          (googleUser) => {
            const profile = googleUser.getBasicProfile();
            const token = googleUser.getAuthResponse().id_token;

            $(".preloader").fadeIn();

            this.google.login({ token }).subscribe(
              (resp) => {
                resolve(profile);
              },
              (error) => {
                $(".preloader").fadeOut();
                reject(error);
              }
            );
          },
          function (error) {
            alert(JSON.stringify(error, undefined, 2));
          }
        );
      }),

    /**
     * Authenticates against the App's server
     *
     * @param gToken {string} Token returned by Google
     */
    login: (gToken: LoginGoogleData) => {
      this.resetCurrentUser();

      return this.http.post(`${this.baseURL}/login/google`, gToken).pipe(
        tap(({ token = '' }: any) => {
          localStorage.setItem('token', token);
        }),
        tap((data: any) => this.setCurrentUser(data))
      );
    },

    /**
     * Performs the App's logout
     *
     * @param callback {Function} Anonymous function called once logout has completed
     */
    logOut: (callback?: Function) => {
      const logOut = () => {
        this.resetCurrentUser();

        const auth2 = this.google.startApp['gapiAuth2'].getAuthInstance();

        auth2.signOut().then(() => {
          typeof callback === 'function' && this._ngZone.run(() => callback());
        });
      };

      // In case the session is lost because the page was refreshed
      if (!this.google.startApp['gapiAuth2']) {
        this.google.initGoogleAuth().then(() => logOut());
      } else {
        logOut();
      }
    },
  };

  /**
   * Gets the locally stored token
   */
  get token(): string {
    return localStorage.getItem('token') || '';
  }

  /**
   * Validates the token. This method is used in auth.guard to grant or deny
   * access to certain areas or pages; it also stores sensitive user
   * information (name, email, img, google, role, uid) in this service,
   * in the class's public currentUser: UserModel property.
   */
  validateToken(): Observable<any> {
    // Gets the locally stored token
    const token = this.token;

    // First check that the token exists before sending it to the server for validation
    if (!token) {
      return throwError('Usuario no logeado');
    }

    return this.http
      .get(`${this.baseURL}/login/tokenrenew`, { headers: { Authorization: token } })
      .pipe(
        tap(({ token = '' }: any) => {
          // Store the new token
          localStorage.setItem('token', token);
        }),
        tap((data: any) => this.setCurrentUser(data))
      );
  }

  loginUser(formData: LoginForm): Observable<any> {
    this.resetCurrentUser();

    return this.http.post(`${this.baseURL}/login`, formData).pipe(
      tap(({ token = '' }: any) => {
        localStorage.setItem('token', token);
      }),
      tap((data: any) => this.setCurrentUser(data))
    );
  }

  resetCurrentUser() {
    this.currentUser = new UserModel(null, null, null, null, null, null, null);
    localStorage.removeItem('token');
  }

  private setCurrentUser({ usuario: { name, email, img, google, role, uid } }) {
    this.currentUser = new UserModel(name, email, '', img, google, role, uid);
  }
}
```

**Models**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bxe2w97ksu2p3z74m0f9.png)

**Interfaces**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1ktdo7m37c9cxaot814p.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bxdoobu2zibkb7meh012.png)

## On whether to use classes or interfaces, here is an article with more information

**Using Models, classes and Interfaces in Angular** 👇

[https://dennysjmarquez.dev/magazine/usar-modelos-clases-e-interfaces-en-angular-lH8cmIS9YrgW0nONyoQe/](https://dennysjmarquez.dev/magazine/usar-modelos-clases-e-interfaces-en-angular-lH8cmIS9YrgW0nONyoQe/)

**Use of `import { FormBuilder, FormGroup, Validators } from '@angular/forms';`**

---

Custom validators, i.e. tailor-made validation

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/86juvm9e2as4vgnevn9w.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mwkn2vgcju2f674a96d3.png)

CRUD maintenance of hospitals, users and doctors

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lhs0032m8on3dlhf651c.png)

---

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hb29bgmwhxzciu4d0hwa.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/44ci8njqg3b11c4tz449.png)

# **Users**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/un99prjntfsiazgzcyqa.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/en5cd8h377kp1bn15u0b.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i2ol20gczjtj0nks24pj.png)

# Hospitals

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pddrb1ux8cv5ibo8miiy.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/af3o6i0bbxfh85qubz1m.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c8n82s48vjcc7u4ix1o8.png)

# Doctors

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3mrkrgmt2waispgthled.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tydl0nqe8ov09iy89q0v.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q90ses4oncivaskwmuoo.png)

# Profile

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z4qxz7487qgn8ir3sf6n.png)

---

# Session 2 — Back-End

---

Node — Express — MongoDB

**Demo:** [https://adminpro-system-hospitals.onrender.com/](https://adminpro-system-hospitals.onrender.com/)

**Server source code:** [https://github.com/dennysjmarquez/angular-adv-adminpro-backend](https://github.com/dennysjmarquez/angular-adv-adminpro-backend)

---

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7739i5bzy39nigu3ye5p.png)

Use of **MongoDB Compass**, and **MongoDB Atlas** for hosting the DB, plus configuration. Example configuration: adding the IP `0.0.0.0/0` under **Network Access** in **MongoDB Atlas**, which opens our DB so that any IP address can connect.
🤘🏻

## Connecting the back end to Mongo Atlas using [Mongoosejs](https://mongoosejs.com/)

database/config.js

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qf8z0v2c1obzv296dml3.png)

index.js

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fie0zqu2wvmkf72jn5xt.png)

## Creating models to interact with the MongoDB Atlas DB: CRUD

Models for Hospitals, Users and Doctors

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z6hq7ws060hcdscg38qq.png)

**Schemas with references and the use of populate, to attach extra or required information to the schema in question.**

models/hospital.model.js

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qiuhrx5tp1881q9hjjoh.png)

models/medico.model.js

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ytq9gz4e2thyzd1ecwwp.png)

controllers/hospitals.controller.js

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dn7tnsi3h5odqj5kwrp1.png)

controllers/medicos.controller.js

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2jv9wu4ln4kf5038dddu.png)

**Customizing schema collection names:** with `{ collection: 'hospitales' }` we can personalize them 🤟🏻

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u53t6jlvf18bieabf42w.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/okhgok6gyiwr0zwmykul.png)

models/hospital.model.js

```js
const { Schema, model } = require('mongoose');

const hospitalSchema = Schema(
  {
    name: {
      type: String,
      required: true,
    },
    img: {
      type: String,
    },
    user: {
      required: true,
      type: Schema.Types.ObjectID,
      ref: 'User',
    },
  },
  // By default mongoose pluralizes model names by appending an "s", which
  // here would be "Hospitals"; with this option we give it a custom name,
  // "hospitales", and that is how it will appear in the mongoose DB
  { collection: 'hospitales' }
);

// This modifies the names of the fields returned from the DB
hospitalSchema.method('toJSON', function () {
  // Destructured fields are no longer returned; for example the password
  // should not be exposed for security reasons, so it is not returned.
  // __v is likewise removed for pure aesthetics
  const { __v, _id, ...object } = this.toObject();
  object.uid = _id;
  return object;
});

module.exports = model('Hospital', hospitalSchema);
```

models/medico.model.js

```js
const { Schema, model } = require('mongoose');

const medicoSchema = Schema({
  user: {
    type: Schema.Types.ObjectID,
    ref: 'User',
    required: true,
  },
  name: {
    type: String,
    required: true,
  },
  img: {
    type: String,
  },
  hospital: {
    type: Schema.Types.ObjectID,
    ref: 'Hospital',
    required: true,
  },
});

// This modifies the names of the fields returned from the DB
medicoSchema.method('toJSON', function () {
  // Destructured fields are no longer returned; for example the password
  // should not be exposed for security reasons, so it is not returned.
  // __v can likewise be removed for pure aesthetics, and _id can be
  // renamed to uid if needed; the object is returned with the modified
  // fields.
  const { __v, _id, ...object } = this.toObject();
  object.uid = _id;
  return object;
});

module.exports = model('Medico', medicoSchema);
```

models/usuario.model.js

```js
const { ROLES } = require('../constant');
const { Schema, model } = require('mongoose');

const userSchema = Schema({
  name: {
    type: String,
    required: true,
  },
  email: {
    type: String,
    required: true,
    unique: true,
  },
  password: {
    type: String,
    required: true,
  },
  img: {
    type: String,
  },
  role: {
    type: String,
    required: true,
    default: ROLES.USER_ROLE,
  },
  google: {
    type: Boolean,
    default: false,
  },
});

// This modifies the names of the fields returned from the DB
userSchema.method('toJSON', function () {
  const { __v, _id, password, ...object } = this.toObject();
  object.uid = _id;
  return object;
});

module.exports = model('User', userSchema);
```

**JWT validation**

Using a middleware — middlewares/validate-jwt.middleware.js

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/stvo50787fzv1plf2s47.png)

**Using _express-validator_** to validate the data sent to the server in the body

routes/auth.route.js — it is used in the routes as a middleware

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nrq2k6u7fu3g3k66v7uy.png)

**Validating a MongoID** with `.isMongoId()`:

```js
check('hospital', 'El id del hospital no es válido').isMongoId(),
```

CRUD for doctors, users and hospitals

## Using the mongoose models to fetch, search, update and delete information in the DB

The model is created from the Schema.

**The Hospitals model as an example:** models/hospital.model.js

Fetching all the hospital data stored in the collection.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8fqdaqtvw924s76yz99d.png)

Saving information.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hxc53u2qq7sdjrmk5dsb.png)

Updating information.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zqtube2artdny1np78uu.png)

Deleting a hospital.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7gdjl46r8i3ok6k8us06.png)

Searching for a hospital using regular expressions.

controllers/search.controller.js

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h8bv9w3xpfueit7so144.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w4oetgrqjpsa8s9l1k9w.png)

Using **find({})** with no parameters returns the entire collection.

**Searching several collections at once.**

controllers/search.controller.js

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g5zkbj5jg7a2gsgwmfgi.png)

**Paginating the data using .skip and .limit**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ma8m3xk4virn8kywknu6.png)

## Route protection based on JWT and a role system

constant.js

```js
/**
 *
 * Global constants file
 *
 */
module.exports = {
  ROLES: {
    ADMIN_ROLE: 'ADMIN_ROLE',
    USER_ROLE: 'USER_ROLE'
  }
}
```

middlewares/validate-role.middleware.js

```js
const { request, response } = require('express');
const UsersModel = require('../models/usuario.model');

const validateRole = (roles = [], paramsUID = false) => async (req = request, res = response, next) => {
  try {
    let getParamsUID;
    const { uid } = req.usuario;

    // Get the user from the uid
    const usuario = await UsersModel.findById(uid);

    if (paramsUID) {
      getParamsUID = req.params.id;
    }

    if (!usuario || (!roles.includes(usuario.role) && !(paramsUID && getParamsUID === uid))) {
      return res.status(403).json({
        msg: 'Acceso denegado',
      });
    }

    next();
  } catch (e) {
    return res.status(403).json({
      msg: 'Acceso denegado',
    });
  }
};

module.exports = {
  validateRole,
};
```

middlewares/validate-jwt.middleware.js

```js
const { response, request } = require('express');
const jwt = require('jsonwebtoken');
const validateJWT = async (req = request, res = response, next) => {
  const { authorization: token } = req.headers;

  try {
    if (!token) {
      return res.status(401).json({
        msg: 'Token no definido',
      });
    }

    // Read the token
    const data = await jwt.verify(token, process.env.JWT_SECRET);
    const { uid, role } = data.payLoad;

    // The uid is passed on to the controller
    req.usuario = { uid, role };

    next();
  } catch (e) {
    res.status(500).json({
      msg: 'Token no valido',
    });
  }
};

module.exports = {
  validateJWT,
};
```

**Login with Google and verification of its token**

helpers/googleVerifyIdToken.helper.js

```js
const { OAuth2Client } = require('google-auth-library');
const client = new OAuth2Client(process.env.GOOGLE_ID);

const googleVerifyIdToken = async (token) => {
  const ticket = await client.verifyIdToken({
    idToken: token,
    // Specify the CLIENT_ID of the app that accesses the backend
    audience: process.env.GOOGLE_ID,
  });

  const { email, name, picture } = ticket.getPayload();
  return { email, name, picture };
};

module.exports = { googleVerifyIdToken };
```

controllers/auth.controller.js

```js
const loginGoogle = async (req = request, res = response) => {
  const { token: G_token } = req.body;

  try {
    const { email, name, picture } = await googleVerifyIdToken(G_token);

    // Check whether the user already exists or a new one must be created
    const userDB = await UsersModel.findOne({ email });
    let userNew;

    if (!userDB) {
      userNew = new UsersModel({
        password: '123456',
        name,
        email,
        google: true,
        img: picture,
      });

      // Save the user in the DB
      await userNew.save();
    } else {
      userNew = userDB;
      userNew.google = true;

      // Save the user in the DB
      await userNew.save();
    }

    const payLoad = {
      uid: userNew.id,
      role: userNew.role,
    };

    // Generate a JWT token
    const token = await generateJWT(payLoad);

    res.json({ token, usuario: userNew });
  } catch (e) {
    console.log(e);
    res.status(500).json({
      msg: 'El Token no es correcto ',
    });
  }
};
```

**Regular login.**

controllers/auth.controller.js

Using **findOne** to return the first match in the collection.
```js
const login = async (req = request, res = response) => {
  const { email, password } = req.body;

  try {
    // Verify the email
    const userDb = await UsersModel.findOne({ email });

    if (!userDb) {
      return res.status(404).json({
        msg: 'No se ha podido encontrar tu cuenta',
      });
    }

    // Verify the password
    const validPass = bcrypt.compareSync(password, userDb.password);

    if (!validPass) {
      return res.status(400).json({
        msg: 'Contraseña incorrecta',
      });
    }

    const payLoad = {
      uid: userDb.id,
      role: userDb.role,
    };

    // Generate a JWT token
    const token = await generateJWT(payLoad);

    res.json({ token, usuario: userDb });
  } catch (e) {
    res.status(500).json({
      msg: 'Error inesperado… revisar logs',
    });
  }
};
```

## Using **express-fileupload** to upload files

routes/upload.route.js

```js
const { Router } = require('express');
const router = Router();
const { ROLES } = require('../constant');

// Middlewares
const { validateJWT } = require('../middlewares/validate-jwt.middleware');
const { validateUploads } = require('../middlewares/validate-uploads.middleware');
const fileUpload = require('express-fileupload');
router.use(fileUpload());

// Controllers
const { upLoad, returnImg } = require('../controllers/upload.controller');
const { validateRole } = require('../middlewares/validate-role.middleware');

router.put('/:type/:id', [validateJWT, validateRole([ROLES.ADMIN_ROLE], true), validateUploads], upLoad);
router.get('/:type/:photo', [validateUploads], returnImg);

module.exports = router;
```

controllers/upload.controller.js

```js
const { request, response } = require('express');
const { v4: uuidv4 } = require('uuid');
const { upDateImage } = require('../helpers/upDate-image.helper');
const path = require('path');
const fs = require('fs');

const upLoad = async (req = request, res = response) => {
  try {
    const { id, type } = req.params;

    // Validate that a file was sent
    if (!req.files || Object.keys(req.files).length === 0) {
      return res.status(400).json({ msg: 'Error: No se ha mandado ningún archivo' });
    }

    // Process the image
    const file = req.files.image;
    const nameSplit = file.name.split('.');
    const extFile = nameSplit[nameSplit.length - 1].toLowerCase();

    // Allowed MIME types
    const mimeTypeValid = ['image/jpeg', 'image/png', 'image/gif'];

    // Verify that what was sent is of an allowed type
    if (!mimeTypeValid.includes(file.mimetype)) {
      return res.status(400).json({ msg: 'Error: No es un archivo permitido' });
    }

    // Generate the new file name
    const nameFile = `${uuidv4()}.${extFile}`;

    // Path where the file will be stored
    const filePath = `./uploads/${type}/${nameFile}`;

    // Move the image
    await file.mv(filePath, (err) => {
      if (err) {
        console.log(err);
        return res.status(500).json({
          msg: 'Error inesperado no se pudo subir la imagen… revisar logs',
        });
      }

      // Update the database
      upDateImage(type, id, nameFile);

      res.json({ upLoad: true, nameFile });
    });
  } catch (e) {
    console.log(e);
    res.status(500).json({ msg: 'Error inesperado… revisar logs' });
  }
};

const returnImg = async (req = request, res = response) => {
  try {
    const { photo, type } = req.params;
    let pathImg = path.join(__dirname, `../uploads/${type}/${photo}`);

    // If the image does not exist, a default one is sent
    if (!fs.existsSync(pathImg)) {
      pathImg = path.join(__dirname, `../uploads/no-img.jpg`);
    }

    return res.sendFile(pathImg);
  } catch (e) {
    console.log(e);
    res.status(500).json({ msg: 'Error inesperado… revisar logs' });
  }
};

module.exports = { upLoad, returnImg };
```

helpers/upDate-image.helper.js

```js
const fs = require('fs');
const UsersModel = require('../models/usuario.model');
const HospitalsModel = require('../models/hospital.model');
const MedicosModel = require('../models/medico.model');

const deleteImg = (path) => {
  if (fs.existsSync(path)) {
    try {
      fs.unlinkSync(path);
    } catch (e) {
      return false;
    }
  }
};

const upDateImage = async (type, id, nameFile) => {
  switch (type) {
    case 'hospitals': {
      const hospital = await HospitalsModel.findById(id);

      if (!hospital) {
        console.log('El id del hospital no existe');
        return false;
      }

      if (hospital.img) {
        const oldPath = `./uploads/${type}/${hospital.img}`;
        // Delete the previous image
        deleteImg(oldPath);
      }

      hospital.img = nameFile;

      try {
        await hospital.save();
        return true;
      } catch (e) {
        return false;
      }
    }
    case 'medicos': {
      const medico = await MedicosModel.findById(id);

      if (!medico) {
        console.log('El id del medico no existe');
        return false;
      }

      if (medico.img) {
        const oldPath = `./uploads/${type}/${medico.img}`;
        // Delete the previous image
        deleteImg(oldPath);
      }

      medico.img = nameFile;

      try {
        await medico.save();
        return true;
      } catch (e) {
        return false;
      }
    }
    case 'users': {
      const user = await UsersModel.findById(id);

      if (!user) {
        console.log('El id del usuario no existe');
        return false;
      }

      if (user.img) {
        const oldPath = `./uploads/${type}/${user.img}`;
        // Delete the previous image
        deleteImg(oldPath);
      }

      user.img = nameFile;

      try {
        await user.save();
        return true;
      } catch (e) {
        return false;
      }
    }
  }
};

module.exports = { upDateImage };
```

**Enabling a public folder to serve the compiled Angular project**

index.js

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/te5s9kikxjddl2kmf402.png)

---

# Session 3 — Unit and integration tests

---

**Demo:** [https://replit.com/@dennysjmarquez/angular-13-unit-test-and-integration-demo](https://replit.com/@dennysjmarquez/angular-13-unit-test-and-integration-demo)

{% embed https://replit.com/@dennysjmarquez/angular-13-unit-test-and-integration-demo %}

**Source code:** [https://github.com/dennysjmarquez/angular-13-unit-test-and-integration](https://github.com/dennysjmarquez/angular-13-unit-test-and-integration)

# The tests are split into 4 categories:

## **Basics**

These tests cover checking arrays, checking booleans, and the different ways of doing so, e.g.
```js
expect(resp).toBe(true)
expect(resp).toBeTrue()
expect(resp).toBeTruthy()

// The negation can be written like this, or with a matcher that checks for false
expect(resp).not.toBeTruthy()
```

I also show how to test functions that live inside a class by checking their return values; tests with numbers using `toBe`; strings using `toContain` and `expect(typeof resp).toBe('string')`; getting familiar with how `expect` evaluates; and the lifecycle hooks of [Jasmine](https://jasmine.github.io/api/3.10/global)'s `describe`, such as `beforeAll`, `beforeEach`, `afterAll`, and `afterEach`, and when to use each of them.

## Intermediate

This section works with somewhat more complex, realistic tests:

1. Tests on Event Emitter
2. Forms
3. Validations
4. Skipping tests
5. Spies
6. Mocking service return values
7. Mocking function calls

This section lays very valuable foundations for writing unit and integration tests.

Simple checks of a component are made using things as simple as `component = new Form(new FormBuilder())`; here `spyOn()` starts to appear, spying on some service methods and making assertions about the results those methods return.

## Intermediate 2

This section focuses on integration tests:

1. Learning the basic setup of an integration test
2. Basic checks of a component
3. TestingModule
4. SPEC files generated automatically by the Angular CLI
5. Tests on the HTML
6. Checking inputs and HTML elements
7.
Separation between unit tests and integration tests

Here I start using `TestBed`, `ComponentFixture`, and `configureTestingModule`, which is a limited copy of what `@NgModule` would be, but for tests, where you can register modules, components, and services. I also deal with Angular's change detection cycle here through `detectChanges`, which updates the HTML so that integration tests can make their checks.

In this session I start using `debugElement.query()` and `By.css` to access the HTML and make the checks an integration test needs.

## Advanced

This section is a real challenge, especially the closer you get to its end. Here we cover topics such as:

1. Checking that a route exists
2. Verifying an Angular directive (router-outlet and routerLink)
3. Errors caused by unknown selectors
4. Replacing Angular services with fake services we control
5. Checking parameters of elements that return observables
6. Subject
7. Gets

In these tests we check the params of `ActivatedRoute`, and we verify the Router navigation with `toHaveBeenCalledWith`, confirming that it is called with the right parameters for the route in question.

-END-
dennysjmarquez
1,223,794
Best Ecommerce Tools to Your Business Growth
There is no doubt that eCommerce has taken the business world by storm. In today's digital age, more...
0
2022-10-19T05:20:36
https://dev.to/jameshoward1203/best-ecommerce-tools-to-your-business-growth-4g87
There is no doubt that eCommerce has taken the business world by storm. In today's digital age, more and more businesses are turning to online platforms to sell their products and services. If you're thinking of starting an eCommerce business, or if you're looking to take your existing business online, you'll need to choose the right tools to help you grow and succeed. In this article, we'll share with you the best [eCommerce tools](https://www.cloudways.com/blog/ecommerce-tools/) to help you boost your business growth.

1. Shopify

Shopify is one of the most popular eCommerce platforms in the world. It's user-friendly, scalable, and comes with everything you need to launch and grow your online store. Plus, it has a wide range of features and integrations that can help you run your business effectively.

2. WooCommerce

WooCommerce is a WordPress plugin that turns your WordPress website into a fully-functional online store. It's user-friendly, flexible, and comes with a wide range of features to help you grow your business.

3. BigCommerce

BigCommerce is another popular eCommerce platform that helps you build a professional online store. It's feature-rich, scalable, and comes with everything you need to launch and grow your business.

These are just some of the best eCommerce tools that can help you boost your business growth. Choose the right platform and tools for your business, and you'll be well on your way to success.
jameshoward1203
1,223,805
Minor imperfections that shout ‘beginner code’
For about a year, I've been mentoring a few people who are just getting into programming. While...
0
2022-10-19T05:44:49
https://how-to.dev/minor-imperfections-that-shout-beginner-code
beginners
For about a year, I've been mentoring a few people who are just getting into programming. While reviewing their code, I have noticed a few issues that appear repeatedly. Many of those issues are:

* minor enough to look OK for beginners
* annoying enough to be immediately noticed by more experienced developers

Let’s go through these imperfections quickly so you can avoid them in your code!

## No new line at the end of the file

There is a UNIX convention of adding new line characters at the end of each file. It started because command line tools like `cat` were displaying many files without putting any separators between them. Git identifies files by the hash of their content—even one character difference makes a file different for Git. Those two things combined cause Git and GitHub to point it out whenever a file is missing a new line at the end.

Git command line:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8uf7u2818se2iy2stvq1.png)

GitHub:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ow9z18bpixp1cn9qmo5.png)

Many projects I’ve seen have a policy of requiring each text file to contain a new line character—following the UNIX convention.

### Solution

There should be an option in your code editor to add those new line characters for you. Alternatively, you can check the support for [EditorConfig](https://editorconfig.org/) in your editor and set it up on the project level.

## Inconsistent code styling

Please take a look at this code:

```JS
if (a == 1){
  b = 2;
} else {
  b =3;
}
```

I used to write like this when I was starting to program, but now I feel almost offended by the code style here. And I’m not alone: other [programmers](https://how-to.dev/what-tech-newbies-need-according-to-industry-insiders) point to consistency as an important part of coding as well.
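For contrast, here is the same branching logic with consistent styling, wrapped in a hypothetical `pickB` function (the wrapper is mine, not from the snippet above) so it can run on its own:

```javascript
// Same logic, consistently styled: one brace style, uniform
// spacing, and strict equality instead of `==`.
function pickB(a) {
  let b;
  if (a === 1) {
    b = 2;
  } else {
    b = 3;
  }
  return b;
}

console.log(pickB(1)); // 2
console.log(pickB(7)); // 3
```

Nothing about the behavior changed; only the formatting did, and that alone makes it read as more experienced code.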
### Solution

You don’t have to:

* perceive all the styling issues, nor
* fix them manually, nor even
* think about code style details.

Rather, just get some popular and opinionated code style automation tool and outsource all your styling needs to that tool. For the frontend, you have Prettier, which I covered in [another article](https://how-to.dev/how-to-make-your-code-prettier).

## Commit messages

I often send the [commit message guidelines](https://www.freecodecamp.org/news/how-to-write-better-git-commit-messages/#5-steps-to-write-better-commit-messages) to anybody I start working with. Plenty of projects enforce those rules—or something even more elaborate. Why do I and other people care so much?

* Some projects generate change logs automatically from commits. For example, Angular uses [conventional-changelog](https://github.com/conventional-changelog/conventional-changelog).
* Git history should provide a quick overview of what happened in the project—messy messages make it more difficult to digest.
* `git blame` points to the commit that changed a given line the last time—a good commit message speeds up understanding what happened then and why.

For a beginner working on a personal project, I would stick to:

* using the verb in the first person imperative in the present tense: `add index.html` instead of `added index.html` or `adding index.html`,
* keeping capitalization consistent—either always starting with a lower- or uppercase letter,
* not using a period at the end of the message,
* staying below 50 characters in the commit message. A bit more is fine, but try to avoid going over 70. Most Git tooling doesn’t wrap messages, so long messages will be truncated or go outside of the screen.

## `!important` CSS

When you use `!important` in your styles, you force a given CSS rule over any other. It’s a [code smell](https://en.wikipedia.org/wiki/Code_smell), or a way of achieving your goal when things have already started going bad in the project.
When you use `!important`, you are removing a simple option to override the values with more specific selectors—a key feature of CSS. The only situation when you cannot avoid `!important` is when you override another rule that already uses it, and which comes from code you cannot control—for example, from third-party libraries.

What should you use instead? Any way of making the selector more specific and therefore stronger. In the worst case, you can duplicate a class name to make one of the selectors stronger:

```CSS
.side-bar.side-bar {
  color: green;
}

.content {
  color: blue;
}
```

will make `<div class="content side-bar">test</div>` green.

## Folder structure and file names

My expectations about codebase structure are that:

* it should be clear what I will find inside any folder or file,
* it should be obvious where the new code that I’m about to write should go,
* it should be consistent, and
* it should be rather simple.

Examples that don’t meet those expectations:

* ```
  controllers/
    some-controller.ts
    anotherController.ts
  ```
  The above mixes kebab-case with camelCase in file names: it’s unclear how new files should be named.
* ```
  admin/
    some-class.ts
  classes/
    another-class.ts
  views/
    index.html
  ```
  The snippet above mixes folders matching use cases (`admin/`) and folders matching file type (`classes/`). It’s unclear where an admin-related class belongs.
* ```
  very/
    nested/
      folder/
        file.ts
  ```
  And finally, the above is more nested than necessary.

## Summary

Those few things look rather minor to anybody who’s just starting to program, but it’s common for experienced programmers to pay attention to and notice them. Getting them straight sooner rather than later is a good idea: it will make your code look more professional, and doing so will help reviewers focus on more important things.
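The commit-message rules above are easy to check mechanically. Here is a minimal sketch (a hypothetical `lintCommitMessage` helper I made up for illustration, not a real tool) that flags the most common violations:

```javascript
// Hypothetical checker for the commit-message rules above:
// imperative mood, no trailing period, subject under ~50 chars.
function lintCommitMessage(message) {
  const problems = [];
  const [subject] = message.split('\n');
  if (subject.length > 50) {
    problems.push('keep the subject below 50 characters');
  }
  if (subject.endsWith('.')) {
    problems.push('do not end the subject with a period');
  }
  // Catch past/progressive forms of a few common verbs.
  if (/^(added|adding|fixed|fixing|updated|updating|removed|removing)\b/i.test(subject)) {
    problems.push('use the imperative: "add index.html", not "added index.html"');
  }
  return problems;
}

console.log(lintCommitMessage('add index.html')); // []
console.log(lintCommitMessage('Added index.html.')); // flags two problems
```

Real projects usually wire a check like this into a Git hook (for example via commitlint), so bad messages are rejected before they land in history.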
marcinwosinek
1,224,242
Why are web developers paid so much?
A few years ago, web development was seen as a mysterious tech role where ‘nerds’ wrote lines of unsolvable code in dark rooms. Fast forward to today, and web development makes you think of images of innovative professionals who’ve coded their way to the top of the tech industry.
0
2022-10-26T00:28:16
https://scrimba.com/articles/why-web-developers-get-paid-so-much/
webdev, beginners, career, salary
--- title: Why are web developers paid so much? published: true description: A few years ago, web development was seen as a mysterious tech role where ‘nerds’ wrote lines of unsolvable code in dark rooms. Fast forward to today, and web development makes you think of images of innovative professionals who’ve coded their way to the top of the tech industry. tags: #webdev #beginners #career #salary canonical_url: https://scrimba.com/articles/why-web-developers-get-paid-so-much/ cover_image: https://scrimba.com/articles/content/images/size/w2000/2022/09/Why-are-web-developers-paid-so-much_-4.png # Use a ratio of 100:42 for best results. --- A few years ago, web development was seen as a mysterious tech role where ‘nerds’ wrote lines of unsolvable code in dark rooms. Fast forward to today, and web development makes you think of images of innovative professionals who’ve coded their way to the top of the tech industry. ![From coding in a dark room, to being at the top of the tech industry](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/92ymozpdsmdfpoid8qwr.png) Okay, both of these *are* stereotypes — but web development is increasingly perceived as one of tech’s highest-earning roles, and many career-changers are drawn to the field because of its earning potential. So why are web developers paid so much? And can a web developer become a millionaire? In this article, we'll de-code (get it?) web developer salaries, and understand the drivers behind what makes web development such a lucrative career path. We’ll also equip you with some actionable tips to boost your web development salary, no matter what level you’re at. Let’s start with the basics: ## How much do web developers actually earn? With any career change, you’ll want to know that your new skills will pay the bills. Before embarking on a journey into web development, you might be wondering: Do web developers make good money? 
Naturally, what ‘good money’ looks like is subjective — and the caveat of an ‘average salary’ is that it doesn’t reflect the full spectrum of web developer salaries and experience levels. If you’re wondering if you can get paid six figures as a web developer, we’ve got good news: **You can!** There isn’t necessarily a cap on what most web developers can earn, as long as they continue growing and developing their skills. Nevertheless, it’s still important to get a sense of salary benchmarks. So let’s look at a snapshot of average web developer salaries in the UK (and beyond) with 2022 salary data pulled from [Glassdoor](http://glassdoor.com/). * Average junior web developer salary in the UK: **£25,955** * Average mid-level web developer salary in the UK: **£35,765** * Average senior web developer salary in the UK: **£49,334** **So, what about the average hourly wage for a freelance web developer in the UK?** According to [Expert Market](https://www.expertmarket.co.uk/web-design/how-much-do-freelance-web-designers-charge), freelance web developers can charge anything between £250 and £750+ per day based on experience — with expert developers charging much more. 
**Let’s look at how this data stacks up against the rest of the world:** ![Average web developer salary statistics](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uu4031wgq8e7bc9ta6v3.png) * Average web developer salary in **Ireland**: €43,555 (£39,414) * Average web developer salary in **The USA**: US$73,816 (£68,931) * Average web developer salary in **Canada**: CA$64,009 (£44,011) * Average web developer salary in **Germany**: €52,236 (£47,269) * Average web developer salary in **France**: €40,000 (£36,197) * Average web developer salary in **Spain**: €24,775 (£22,419) * Average web developer salary in **South Africa**: ZAR 240,000 (£12,350) * Average web developer salary in **Australia**: A$78,000 (£47,333) * Average web developer salary in **The UAE**: AED 72,000 (£18,258) * Average web developer salary in **India**: ₹240,000 (£2,744) * Average web developer salary in **Singapore**: SGD 42,000 (£27,314) Again, this is just a snapshot. If you’re curious about what web developers earn in your country, we recommend checking out your local average web developer salaries using Glassdoor. ## Why are web developers paid so much? The short answer to this question is that web development is a complex and layered role, and developers work incredibly hard. The long answer is a little more well-rounded — so let’s take a look at five reasons why web developers are worth every penny. ### Demand Search the web for the most in-demand tech skills in 2022, and you’re almost guaranteed to find the word ‘developer’ on every single list. A quick [Indeed](http://indeed.com/) search brought up 6,654 UK based web developer jobs alone, but a more comprehensive search, including frontend and backend development jobs across [LinkedIn](http://linkedin.com/) and [Glassdoor](http://glassdoor.com/), will show infinitely more. The Covid-19 pandemic saw skyrocketing demand for developers as new emphasis was placed on digital experiences. 
Increasing demand sparked a war for talented devs, which led to higher salaries across the board. High demand gives web developers the upper hand when negotiating salaries, as there’s always an abundance of roles available.

### Skillset

Web development is known as a super accessible career path. Absolutely anyone can learn to code — and for free, at that. But becoming a legitimate, job-ready web developer takes a fair amount of hard work. Most web developers are well-versed in more than one coding language, and will use a variety of coding languages in their work to get the job done.

Web development is also a highly technical role, which requires a good dose of self-discipline to truly master. To become a web developer, you’ll need to learn skills like testing and debugging, analysis, frontend and/or backend development, technical SEO, and even responsive web design. But, as we’ve learned, your hard work *will* pay off!

### Responsibilities

Web developers do so much more than just build, develop, and deploy websites. With each digital experience they create, it’s up to the web developer to balance client or company needs with the experience of the end user. To manage this, web development roles include a degree of project management, presentation, and collaboration with designers and other stakeholders.

Web developers are also continually optimizing their digital products; finding ways to improve how the website runs, and coming up with quick and efficient ways to fix bugs. This is highly technical and collaborative work which requires developers to solve complex issues on an ongoing basis.

### Adaptability

In addition to being versatile, web developers also have to be incredibly adaptable. Technology — and the internet — is always changing fast. User needs are also continuously evolving, and web developers are part-responsible for meeting and anticipating those needs. As a result, a career in web development means **continuous learning**.
Web developers have to be in the know of new developments, trends, tools, practices, and techniques. They’re also expected to regularly upskill and improve in order to stay abreast of the ever-changing technological landscape. For all these reasons (and more) the approach and mindset developers bring to tech teams is extremely valuable. ### Specializations Many web developers decide to specialize in a specific type of web development, which has an impact on their earning potential. For example, the average UK frontend developer salary is slightly higher than a general web developer salary at **£44,833**. Backend developer salaries are even higher, averaging at **£50,358** in the UK. Many have also become specialists in specific areas of web development, like JavaScript frameworks or DevOps. This makes their work more valuable, which is reflected in their salaries. ## Who earns more: Web developers or software engineers? Maybe you’re interested in learning to code, but can’t seem to wrap your head around the seemingly endless list of developer and software engineer job titles. Web development, software development, and software engineering are often bundled up as part of the same discipline. In some cases, the terms are even used interchangeably. Technically, there is a lot of overlap. Both web developers and software engineers rely on similar coding languages and technologies to build, design, and deploy digital experiences. The difference lies more in the nature of the work itself. Web developers work primarily on web experiences, web applications, and desktop programs. On the other hand, software engineers tend to work on a wider variety of IT programs and computer systems, including hardware. Because of this, software engineering is generally seen as more technical — which is reflected in their take-home. In the UK, the average software engineer salary sits at **£48,891** (13k more than web developers). 
While this might seem like compelling data, there’s a payoff on both sides. One reason so many opt for web development is because it’s a slightly less stressful career path, and generally easier to get started with. Plenty of challenges to keep you on your toes, but [the skills are more accessible for complete beginners](https://scrimba.com/articles/do-you-need-a-computer-science-degree-to-be-a-web-developer/). Web development can also be a solid stepping-stone into more technical and specialized programming career paths. As there are so many shared skills between the two disciplines, there’s no reason a web developer couldn’t become a software engineer (and vice versa). ## How do I increase my salary as a web developer? As with most entry-level positions, a junior web developer role won’t see you rolling in cash. With experience, your salary will naturally increase — but there are also a few additional ways you can maximize your earning potential as a web developer: ### Upskill One of the best ways to make yourself more profitable as a web developer is to update your skills arsenal. This could be something straightforward, like learning a new coding language. But if you really wanted to push the boat out, you could [learn web design](https://scrimba.com/learn/design). Web developers who can both build — and design — a website are indispensable to tech teams and clients, as it means they can hire one person instead of two (and avoid the hassle of friction-filled handoffs). Demand for developers who also design is high, which puts them in a good position to earn more. As in, *a lot* more. If you’re not able to take on some more responsibility in your current role, freelancing is a great way to gain new skills and build up your portfolio. ### Create a personal brand If you thought you had to be an influencer to create a personal brand, you’re wrong. 
More and more tech professionals are building personal brands that attract employers, grow followers, and boost credibility. You also don’t have to be an industry leader or senior developer to create a personal brand: Personal brands are all about storytelling, and everyone has a story to tell — even junior developers. If you want to get your name out there to higher-paying employers or clients, and showcase that you’re up to date with the latest tools and techniques, this is a vital step. ### Build a niche If you’re new to web development, your focus will be about getting as much industry exposure as possible. But the further you get along your career, the pickier you can be about the industries you work in. As a follow-on from building a personal brand, you might want to consider choosing a niche that makes you an ideal candidate for specific web development roles. This could be a specific industry you’ve really gotten to know, an approach you’ve mastered, or a web development skill you feel you’ve really nailed. Having a specific niche will give you a competitive edge on the tech job market, and will likely mean clients and employers within that niche will pay top dollar to secure you. ### Switch companies Sometimes the best solution is the most simple one, and it might be that your company simply isn’t paying you enough — especially if there aren’t any raises or promotions on the horizon. Continuous learning and professional development are important pillars of the development community, so it’s also important to recognise when you’re stagnating in your role — and the projects you’re working on aren’t challenging you. If you follow some of the above steps, you’ll be in a much better position to negotiate a higher salary in your new role. ### Increase your visibility The web development community is big, supportive, and vocal. If you’re looking for ways to get your name out there, start there. 
There’s an abundance of ways to network and get more involved; including attending webinars and meet-ups, answering questions on coding forums like [Discord](https://discord.com/), going to hackathons — the list is endless. Not only will you gain fresh perspectives and learn new skills, you’ll also start to be seen as an industry expert — which will help legitimize your quest for a higher salary.

## The verdict

As you segue into web development, it’s important to **manage your expectations**. Yes, web developers can earn six figures, and even become millionaires. But, as is the case with any new career, you have to start somewhere.

We’re *definitely* not going to say ‘money isn’t everything’ because, as exciting as it is, no one would work as a web developer for free. But there are other important factors to take into account in your decision to pursue web development; like whether or not web development is future-proof (it is), if web developers are in high demand (they are), and whether web development would be a fulfilling career path for you (if you’re reading this blog post, we’d wager that’s a yes!).

Web development has a lot to offer. If you’re wondering what to get started with, we recommend a [free beginners course to learn JavaScript](https://scrimba.com/learn/learnjavascript).
bookercodes
1,224,274
The Latest GNOME 43, “Guadalajara”, Released See What’s New
The GNOME project has announced the immediate availability and release of version 43 after putting in...
0
2022-11-14T16:16:00
https://fossnaija.com/gnome-43-released/?utm_source=rss&utm_medium=rss&utm_campaign=gnome-43-released
fossstories, fossnaijanews
---
title: The Latest GNOME 43, “Guadalajara”, Released See What’s New
published: true
date: 2022-10-19 11:15:17 UTC
tags: FOSSStories,FossNaijaNews
canonical_url: https://fossnaija.com/gnome-43-released/?utm_source=rss&utm_medium=rss&utm_campaign=gnome-43-released
---

The [GNOME project](https://www.gnome.org/about-us/) has announced the immediate availability and release of version 43 after putting in a lot of effort over the past six months. The most recent version of GNOME brings a plethora of enhancements, some of which include a revamped Files app, an updated quick settings menu, and improved integration with hardware security. The migration of GNOME apps from GTK 3 to GTK 4 continues in GNOME 43, alongside a wide variety of additional minor improvements. In honour of the hard work put in by the organisers of GUADEC 2022, this version of GNOME has been given the code name “Guadalajara”.

## What’s new?

The menu that displays the state of the system has been modernised in GNOME 43, so common adjustments can now be made more quickly. There is no longer any need to delve deeply into menus to change settings; such adjustments can now be made with the touch of a button. The new design also makes it easy to get an overview of your current settings at a glance.
[![](https://i0.wp.com/fossnaija.com/wp-content/uploads/2022/10/nautilus-screenshot.webp?resize=665%2C455&ssl=1)](https://i0.wp.com/fossnaija.com/wp-content/uploads/2022/10/nautilus-screenshot.webp?ssl=1)

_Nautilus File Manager (GNOME)._

In addition to simplifying the usage of the currently available options, the new settings interface brings a number of noteworthy new features:

- The menus now include a choice for the user interface style, allowing users to choose between a light and a dark design. Until this release, the Settings application was the only place this option was available.
- The built-in screenshot feature, which was introduced in GNOME 42, has been expanded in this release with a new button for taking screenshots.
- Users can now toggle between the available sound sources directly from the menu. This eliminates the need to dig through the maze of the Settings application to make the necessary adjustments.
- Whenever the PC’s VPN is off, hitting the VPN button begins reconnecting to the most recent network it was using.
- And so on…

## Getting and Installing GNOME 43

GNOME is very popular [free software](https://fossnaija.com/free-vs-non-free-softwares-blurred-edges/); this means that all of its source code is publicly downloadable and may be freely updated and shared as long as it adheres to the licences that govern it. If you want to install it, you should wait until your [Linux](https://fossnaija.com/?s=linux) vendor or [distribution (distro)](https://dev.to/xeroxism/5-top-privacy-and-security-linux-distributions-2n92) releases the official packages. Some of the most popular distributions have already developed releases that integrate the new GNOME release, and they will shortly make GNOME 43 available to their users.
You may also experiment further with the GNOME OS image by using the Boxes app to run it in a virtual machine environment. **Happy Linux’NG!** The post [The Latest GNOME 43, “Guadalajara”, Released See What’s New](https://fossnaija.com/gnome-43-released/) appeared first on [Foss Naija](https://fossnaija.com).
xeroxism
1,224,539
IBM zDay 2022 Recap: Speaking at zDay for The First Time | Optimizing Sustainability with LinuxOne
IBM zDay is a free one-day virtual conference event hosted by IBM about all things IBM zSystems....
0
2022-10-19T20:06:31
https://community.ibm.com/community/user/ibmz-and-linuxone/blogs/muhammad-hannan-khan/2022/10/19/ibm-zday-2022-recap-speaking-at-zday-for-the-first?CommunityKey=9476eac0-4605-4c10-906d-01a95923ae0b
ibm, linux, linuxone, sustainability
IBM zDay is a free one-day virtual conference event hosted by IBM about all things IBM zSystems. Global thought leaders come together to highlight industry trends, innovations in AI, quantum computing, mainframes, and a lot more! I had the pleasure of being invited to speak at zDay 2022 and open for Dr. Fan Jing Meng from IBM China, conducting a demonstration on the new IBM LinuxONE Emperor 4 and its impact on energy consumption.

![Speaker Lineup](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tbj41kzif9a8hqe8pj7x.PNG)

**Speaker Lineup**

zDay 2022 boasted a speaker lineup of some of the biggest names in tech and enterprise computing, such as Linus Sebastian from Linus Tech Tips, Ross Mauri (General Manager at IBM Systems), John Mertic (Program Director at The Linux Foundation), Meredith Stowell (VP at IBM Ecosystems), and many other industry leaders.

![Team Intro](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j8ssk5rgo3fpsx2ahdi6.PNG)

**Optimizing Sustainability**

My session focused on the effects of big data on the environment and what can be done for the “greening” of the IT sector. According to a report by IBM, titled “IT sustainability beyond the data center: Decarbonizing with hybrid cloud”, data centers around the world collectively consume 200 to 250 terawatt-hours (TWh) of electricity, per the International Energy Agency (IEA). That’s roughly 1% of global electricity demand and approximately 0.3% of all global carbon emissions. Demand for data centers and network services will only continue to grow in the future, consuming even more electricity and producing even more carbon.
Some estimates suggest that there has been a 43% absolute increase in the power capacity demanded by data center operators between 2018 and 2021, and that the global data center market will grow by more than 30% between 2021 and 2027. Therefore, action to support further efficiency improvements, lower energy consumption, and reduced carbon impact from data centers is critical.

**IBM LinuxONE**

![LinuxONE Emperor 4](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j9s9a97o59u49aj7sa0c.png)

This is where IBM’s new LinuxONE Emperor 4 system comes in. It helps organizations that care about achieving sustainability goals reduce energy costs and carbon footprint with a secure high-performance server platform for data-intensive workloads. To learn more about the IBM LinuxONE Emperor 4, head over to the [website](https://www.ibm.com/products/linuxone-emperor-4) and read more about how it can benefit your organization.

![LinuxONE Stats](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5vt81cuii2l6bcxr2f3l.PNG)

If you want to watch the entire demo and all of the other sessions from zDay 2022, head over to the event website to get free access to the replays [here.](https://ibmzday.bemyapp.com/2022)
hannankhan
1,224,788
How to keep the webpage CSS styles only up to certain device width or maximum width?
Originally posted here! To keep the webpage CSS styles only up to certain device width or maximum...
0
2022-10-20T01:14:35
https://melvingeorge.me/blog/keep-webpage-css-styles-only-upto-certain-device-width-or-maximum-width
css
---
title: How to keep the webpage CSS styles only up to certain device width or maximum width?
published: true
tags: CSS
date: Thu Oct 20 2022 06:44:35 GMT+0530 (India Standard Time)
canonical_url: https://melvingeorge.me/blog/keep-webpage-css-styles-only-upto-certain-device-width-or-maximum-width
cover_image: https://melvingeorge.me/_next/static/images/main-c398fcbc03e4b0d638dd7eb9165293bb.jpg
---

[Originally posted here!](https://melvingeorge.me/blog/keep-webpage-css-styles-only-upto-certain-device-width-or-maximum-width)

To keep the webpage CSS styles only up to a certain device width or maximum width, you can use the `@media` media query syntax followed by `()` symbols (opening and closing brackets). Inside the brackets, you can write the keyword `max-width` followed by the `:` symbol (colon) and then the width with its unit. The CSS styles you write inside this CSS block will be applied as long as the device or window width is at or below the width you provided.

### TL;DR

```html
<html>
  <body>
    <p>Hello World!</p>
  </body>
  <!-- CSS styles -->
  <!-- Using the `@media` and the `max-width` syntax
  we can define the CSS styles to be triggered till
  the window or the device width reaches `1000px`. -->
  <style>
    body {
      background-color: black;
    }
    p {
      color: white;
    }
    @media (max-width: 1000px) {
      body {
        background-color: white;
      }
      p {
        color: black;
      }
    }
  </style>
</html>
```

For example, let's say we have a webpage where the background color of the `body` is black and the `paragraph` text is white by default.
It may look like this,

```html
<html>
  <body>
    <p>Hello World!</p>
  </body>
  <!-- CSS styles -->
  <style>
    body {
      background-color: black;
    }
    p {
      color: white;
    }
  </style>
</html>
```

The output will look like this,

![webpage with black background color and text in white color](https://melvingeorge.me/_next/static/images/default-webpage-ac8bc6ccaa13722f127841659b20ae4f.png)

We aim to change this webpage's `body` background color to `white` and paragraph text color to `black` whenever the window or device width is at most `1000px`. To do that we can use the `@media` media query syntax followed by the `()` brackets symbol. Inside the brackets, we can write `(max-width: 1000px)` to define that the CSS styles should be triggered until the window width reaches `1000px`, that is, while the maximum width of the window is at the `1000px` mark.

It can be done like this,

```html
<html>
  <body>
    <p>Hello World!</p>
  </body>
  <!-- CSS styles -->
  <!-- Using the `@media` and the `max-width` syntax
  we can define the CSS styles to be triggered till
  the window or the device width reaches `1000px`. -->
  <style>
    body {
      background-color: black;
    }
    p {
      color: white;
    }
    @media (max-width: 1000px) {
      body {
        background-color: white;
      }
      p {
        color: black;
      }
    }
  </style>
</html>
```

The visual representation of the CSS styles getting applied is shown below,

![webpage background color is white and text color is black till window width reaches 1000px](https://melvingeorge.mewebpage-change-styles)

See the above code live in [codesandbox](https://codesandbox.io/s/keep-webpage-css-styles-only-upto-certain-device-width-or-maximum-width-o5mjsh?file=/index.html).

That's all 😃!

### Feel free to share if you found this useful 😃.

---
melvin2016
1,224,908
Great Resignations, employee attrition analysis using Machine Learning Algorithms
Abstract: In recent days there were high number of attrition's across all the industries over the...
0
2022-10-20T06:27:40
https://dev.to/gouse_bme/great-resignations-employee-attrition-analysis-using-machine-learning-algorithms-cb2
employee, prediction, attrition, machinelearning
Abstract: In recent days there has been a high number of attritions across all industries over the globe. In this article we try to analyze some of the most influential reasons/factors using ML algorithms. Attrition is defined as an employee leaving the organization for various reasons. The number of employees that leave an organization versus the average number of employees in the organization over a period of time is known as the attrition rate. If the attrition rate is higher than usual, it becomes a matter of concern. If the attrition rate is high, there will be a huge loss of talent for the company. So, it is always suggested to predict employee attrition beforehand [2,3]. If the company has information on which employees may leave the organization, it can take preventive steps to contain the attrition. In this analysis we explore the important factors/attributes that are influencing employee attrition, and how each factor contributes to it. We applied machine learning techniques such as classification prediction and data pre-processing techniques like data extraction, feature engineering, and data sampling. Hence, classification predictive models are implemented in companies to keep track of attrition possibilities and, in turn, to avoid or mitigate employee attrition.

Keywords: Attrition, Classification, Prediction, Feature Extraction, Feature Engineering

1. Statement of problem and objective: Attrition is a big problem in many organizations [4]. In any organization, a small attrition rate is common. But if it is more, then it becomes a matter of concern and the reasons for high attrition rates are to be investigated, so that the company can take the required measures to reduce the attrition rate in future [5,6,7]. If a large number of employees leave an organization, there will be huge production loss, economic loss, loss of clients, and loss of company image. It affects the organization in many ways.
Hence, it is required to investigate the reasons behind a high attrition rate, and [12–14] to build a model to predict attrition of the employees.

2. Methodologies for Analysis: As per the objective of the research question, we adopted the chi2 test statistic and evaluated the predictions of employee attrition. The analysis is carried out using different algorithms like Logistic Regression, Linear Discriminant Analysis, K Nearest Neighbors, Classification and Regression Tree, Gaussian Naïve Bayes, and Support Vector Machine. We used the chi2 test for finding the features that are affecting the attrition of an employee [15]. At the beginning stage, we applied data validation techniques and encoding techniques to convert categorical variables to numerical variables.

2.1 Data Acquisition: Dataset Description. The HRM dataset used in this research work is distributed by IBM Analytics [32]. This dataset contains 35 features relating to 1500 observations and refers to U.S. data. All features are related to the employees’ working life and personal characteristics (see Table 1).

Table 1. Dataset features: Age, Monthly income, Attrition (predicted), Monthly rate, Business travel, Number of companies worked, Daily rate, Over18, Department, Overtime, Distance from home, Percent salary hike, Education, Performance rating [33], Education field, Relationship satisfaction [1,7], Employee count, Standard hours, Employee number, Stock option level, Environment satisfaction, Total working years, Gender, Training times last year, Hourly rate, Work-life balance, Job involvement, Years with company, Job level, Years in current role, Job role, Years since last promotion, Job satisfaction, Years with current manager, Marital status [34–38].

Attrition: A high attrition rate triggers high recruitment costs for resourcing new employees. So, it is always helpful for companies to know the influencing factors of employee attrition.
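The encoding-then-chi2 screening described in section 2 can be sketched with scikit-learn. This is an illustration on synthetic data (where a hypothetical 'OverTime' column drives attrition and 'HourlyRate' is pure noise), not the authors' code or the IBM dataset:

```python
# Sketch of categorical encoding + chi2 feature screening (synthetic data).
import numpy as np
import pandas as pd
from sklearn.feature_selection import chi2
from sklearn.preprocessing import LabelEncoder

rng = np.random.default_rng(0)
n = 500
overtime = rng.integers(0, 2, n)
# Attrition is far more likely for overtime workers in this toy data
attrition = ((overtime == 1) & (rng.random(n) < 0.6)).astype(int)

df = pd.DataFrame({
    "OverTime": np.where(overtime == 1, "Yes", "No"),
    "HourlyRate": rng.integers(30, 100, n),   # noise feature
    "Attrition": np.where(attrition == 1, "Yes", "No"),
})

# Encode categorical variables to numerical variables, as in section 2
X = df[["OverTime", "HourlyRate"]].copy()
X["OverTime"] = LabelEncoder().fit_transform(X["OverTime"])
y = LabelEncoder().fit_transform(df["Attrition"])

# chi2 scores each non-negative feature against the target; keep features
# whose p value is below 0.01, as the paper does
scores, p_values = chi2(X, y)
selected = [col for col, p in zip(X.columns, p_values) if p < 0.01]
print(dict(zip(X.columns, scores.round(2))))
print("selected:", selected)
```

On the real dataset the same loop over all encoded columns yields the nine features the paper reports.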
Here, the chi2 test statistic is used for finding the strong relation or dependency of the attrition variable [22] on the input features of the given data.

2.2 Feature Engineering Techniques for Character Data: When there are many predictors or features, the degree of association between a predictor or input feature and the target feature or outcome can be measured with statistics such as chi2. The features with higher chi2 test statistic values can be the best features to consider for modelling. p values less than 0.01 are used to validate the chi2 score values. The nine features with high chi2 values and p values less than 0.01 are ‘DistanceFromHome’, ‘JobLevel’, ‘MaritalStatus’, ‘OverTime’, ‘StockOptionLevel’, ‘TotalWorkingYears’, ‘YearsAtCompany’, ‘YearsInCurrentRole’, and ‘YearsWithCurrManager’; these are the nine features affecting attrition.

i. Distance From Home: This is one of the input features influencing attrition according to the chi2 test result. Its chi2 test value with the Attrition variable is 59.49, which is a huge score that represents a strong dependency of the target variable on this input feature. The barplot in [Fig01] shows the effect of Distance From Home on Attrition. It shows that those who are 2 km away from the office (near the office) are more likely to leave the company. Those who are much farther from the company are not willing to leave the company. The pie chart in [Fig02] shows that among all the employees who would like to leave the company, more people are 2 km away from the office. Data shows that 11.81% of employees who left the company are 2 km away from the office and 10.97% of employees who left the organization are 1 km away from their office.

ii. Job Level: Job Level is another input feature influencing attrition according to the chi2 test result. Its chi2 test value with the Attrition variable is 21.74, which is a good score that represents a strong dependency of the target variable on this input feature.
There are five job levels in the data. Among all, only a few are influencing attrition. The barplot in [Fig03] shows the effect of Job Level on Attrition. It shows that those who are at Job Level 1 are more likely to leave the company. Those who are at Job Levels 4 and 5 are not willing to leave the company. The pie chart in [Fig04] shows that among all the employees who would like to leave the company, more people are at Job Level 1. Data shows 60.34% of employees who left the company are at Job Level 1, followed by 21.94% at Job Level 2.

iii. Marital Status: Marital Status is another input feature influencing attrition according to the chi2 test result. Its chi2 test value with the Attrition variable is 12.93, which is a good score that represents a good dependency of the target variable on this input feature. The bar plot in [Fig05] shows the effect of Marital Status on Attrition. It shows that those who are Single are more likely to leave the company. Those who are Divorced are less willing to leave the company. The pie chart in [Fig06] shows that among all the employees who would like to leave the company, more people are Single. Data shows 50.63% of employees who left the company are Single, followed by 35.44% who are Married, and the remaining 13.92% who are Divorced.

iv. Over Time: Over Time is another input feature influencing attrition according to the chi2 test result. Its chi2 test value with the Attrition variable is 56.92, which is a huge score that represents a strong dependency of the target variable on this input feature. The bar plot in [Fig07] shows the effect of Over Time on Attrition. It shows that those who are doing overtime are more likely to leave the company, when compared to those who are not working overtime.
The pie chart in [Fig08] shows that among all the employees who would like to leave the company, more people are those who are working overtime. Data shows 53.59% of employees who left the company are doing overtime.

v. Stock Option Level: Stock Option Level is another input feature influencing attrition according to the chi2 test result. Its chi2 test value with the Attrition variable is 17.31, which is a good score that represents a good dependency of the target variable on this input feature. The bar plot in [Fig09] shows the effect of Stock Option Level on Attrition. It shows that those who have Stock Option Level 0 are more likely to leave the company, when compared to Stock Option Levels 1, 2 and 3. The pie chart in [Fig10] shows that among all the employees who would like to leave the company, most are at Stock Option Level 0. Data shows 64.98% of employees who left the company have Stock Option Level 0, followed by 23.63% with Stock Option Level 1.

vi. Total Working Years: Total Working Years is another input feature influencing attrition according to the chi2 test result. Its chi2 test value with the Attrition variable is 219.33, which is a big score that represents a high dependency of the target variable on this input feature. The bar plot in [Fig11] shows the effect of Total Working Years on Attrition. It shows that those who have a total working experience of 1 year are more likely to leave the company, and those with more than 11 years of total working experience are less likely to leave. The pie chart in [Fig12] shows that among all the employees who would like to leave the company, more people have 1 year of total working experience. Data shows 16.88% of employees who left the company have 1 year of total working experience, followed by 10.55% of employees with 10 years. vii.
Years At Company: Years At Company is another input feature influencing attrition according to the chi2 test result. Its chi2 test value with the Attrition variable is 145.78, which is a big score that represents a high dependency of the target variable on this input feature. The barplot in [Fig13] shows the effect of Years At Company on Attrition. It shows that those who have 1 year of experience at the company are more likely to leave, and those who have been in the company for more than 10 years are less likely to leave. The pie chart in [Fig14] shows that among all the employees who would like to leave the company, more people have 1 year of working experience at the company. Data shows 24.89% of employees who left the company have 1 year of working experience at the company, followed by 11.39% of employees with 2 years of experience at the company.

viii. Years In Current Role: Years In Current Role is another input feature influencing attrition according to the chi2 test result. Its chi2 test value with the Attrition variable is 103.62, which is a big score that represents a high dependency of the target variable on this input feature. The barplot in [Fig15] shows the effect of Years In Current Role on Attrition. It shows that those who have less than 1 year of experience in the current role are more likely to leave the company, and those who have more than 10 years of experience in the current role are less likely to leave. The pie chart in [Fig16] shows that among all the employees who would like to leave the company, more people have less than 1 year of working experience in the current role. Data shows 30.80% of employees who left the company have less than 1 year of working experience in the current role, followed by 28.69% of employees with 2 years of experience in the current role. ix.
Years With Current Manager: Years With Current Manager is another input feature influencing attrition according to the chi2 test result. Its chi2 test value with the Attrition variable is 120.49, which is a big score that represents a high dependency of the target variable on this input feature. The barplot in [Fig17] shows the effect of Years With Current Manager on Attrition. It shows that those who have spent less than 1 year with the current manager are more likely to leave the company, and those with more than 10 years with the current manager are less likely to leave. The pie chart in [Fig18] shows that among all the employees who would like to leave the company, more people have spent less than 1 year with the current manager. Data shows 35.86% of employees who left the company have less than 1 year of association with the current manager, followed by 21.10% of employees with 2 years with the current manager.

2.2 Feature Engineering for Numerical Data: The available numerical variables for modelling are “Age”, “DailyRate”, “HourlyRate”, “MonthlyIncome”, and “MonthlyRate”. It is often required to check the correlation among all the numerical features present in the dataset. If there are any highly correlated numerical features present in the data, it is required to remove the redundant features, because most of the time these redundant features reduce the performance of machine learning models. [Table01] shows the correlation among all numerical features.

i. Age: “Age” is one of the numerical variables that is useful in predicting the attrition of an employee. It has exhibited a Gaussian distribution with a skewness of 0.413 and a kurtosis of -0.404, which are valid scores. Data shows that many employees left the organization at an age of 29 and 31 years.

ii. Daily Rate: “Daily Rate” is another numerical variable that is useful in predicting the attrition of an employee.
It has exhibited a Gaussian distribution with a skewness of -0.0035 and a kurtosis of -1.203, which are valid scores.

iii. Hourly Rate: “Hourly Rate” is one more numerical variable that is useful in predicting the attrition of an employee. It has exhibited a Gaussian distribution with a skewness of -0.0323 and a kurtosis of -1.196, which are valid scores. Data shows that more employees with an hourly rate of 66 left the organization.

iv. Monthly Income: “Monthly Income” is one of the numerical variables that is useful in predicting the attrition of an employee. It has not exhibited a Gaussian distribution; its skewness is 1.369, which is not acceptable, and its kurtosis is 1.005. Hence a logarithmic transformation is applied to this variable to make it Gaussian distributed. Now the skewness is 0.286 and the kurtosis is -0.697, which are acceptable scores.

v. Monthly Rate: “Monthly Rate” is one of the numerical variables that is useful in predicting the attrition of an employee. It has exhibited a Gaussian distribution with a skewness of 0.0185 and a kurtosis of -1.214, which are valid scores.

2.3 Machine Learning / AI Algorithms Description:

i. Logistic Regression: Logistic regression assumes a Gaussian distribution for the numeric input variables and can model binary classification problems.

ii. Linear Discriminant Analysis: Linear Discriminant Analysis or LDA is a statistical technique for binary and multiclass classification. It too assumes a Gaussian distribution for the numerical input variables.

iii. K Nearest Neighbors: The k-Nearest Neighbors algorithm (or KNN) uses a distance metric like Euclidean distance to find the k nearest instances in the training data for a new instance, and takes the mean outcome of the neighbors as the prediction.

iv. Naive Bayes: Naive Bayes calculates the probability of each class and the conditional probability of each class given each input value.
These probabilities are estimated for new data and multiplied together, assuming that they are all independent (a simple or naive assumption).

v. CART: Classification and Regression Trees construct a binary tree from the training data. Split points are chosen greedily by evaluating each attribute and each value of each attribute in the training data in order to minimize a cost function.

vi. SVM: Support Vector Machines (or SVM) seek a line that best separates two classes. The data instances that are closest to the line that best separates the classes are called support vectors.

2.5 Evaluating Models: In order to avoid data leakage, the whole dataset is first divided into training and validation datasets. A pipeline process is used to automate the scaling and evaluation of algorithms. Accuracy is chosen as the evaluation metric. KFold cross validation is used for resampling and evaluation of the different algorithms. MinMaxScaler is used for scaling and standardizing the data. Logistic Regression, Linear Discriminant Analysis, K Nearest Neighbors, Classification and Regression Tree, Gaussian Naïve Bayes, and Support Vector Machine algorithms are used. The scores below are the mean and standard deviation of accuracy scores over 10 folds of KFold cross validation.

i. Logistic Regression has given a mean accuracy of 0.851 and a standard deviation of 0.027 on the training data of this dataset.
ii. Linear Discriminant Analysis has given a mean accuracy of 0.845 and a standard deviation of 0.024 on the training data of this dataset.
iii. K Nearest Neighbors has given a mean accuracy of 0.832 and a standard deviation of 0.026 on the training data of this dataset.
iv. Decision Tree Classifier or CART has given a mean accuracy of 0.772 and a standard deviation of 0.034 on the training data of this dataset.
v. Naive Bayes has given a mean accuracy of 0.768 and a standard deviation of 0.037 on the training data of this dataset.
vi.
Support Vector Machine has given a mean accuracy of 0.849 and a standard deviation of 0.030 on the training data of this dataset.

2.6 Finalizing Model: From the above analysis and the stats resulting from the machine learning models, the best model is Logistic Regression, which has the highest accuracy among the models. Hence, it is selected as the final model and its accuracy is checked on the unseen or validation dataset. It has given an accuracy of 0.8639, which is a good score on validation data. The test data of this dataset is used for further future predictions.

3 Findings: From our research on attrition of employees from an organization, significant influencing factors were extracted through feature extraction and feature engineering techniques, and it can be concluded that ‘DistanceFromHome’, ‘JobLevel’, ‘MaritalStatus’, ‘OverTime’, ‘StockOptionLevel’, ‘TotalWorkingYears’, ‘YearsAtCompany’, ‘YearsInCurrentRole’, ‘YearsWithCurrManager’, “Age”, “DailyRate”, “HourlyRate”, “MonthlyIncome”, and “MonthlyRate” are the features affecting attrition. The Logistic Regression algorithm works best on this binary classification prediction problem, with an accuracy of about 85%.

4 Conclusion: A high attrition rate is a problem that is to be carefully examined and investigated to find out the reasons behind it, in order to avoid major losses for the organization. Hence, in our research, we found the major factors that act as driving forces of employee attrition, and accordingly we developed models to predict possible employee attrition. This might help organizations take the required steps to avoid the losses caused by attrition; companies can also apply preventive measures to retain those employees who might leave the organization. The above factors are the most influential in employee attrition.
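The evaluation protocol of section 2.5 (hold out a validation set, then score MinMaxScaler + model pipelines with 10-fold cross validation on accuracy) can be sketched in scikit-learn. The data here is synthetic (`make_classification`), since the IBM HR dataset is not bundled with this sketch, so the printed scores will differ from the paper's:

```python
# Sketch of the section 2.5 protocol: pipelines + 10-fold CV, accuracy metric.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=14, n_informative=6,
                           random_state=7)

# Split off a validation set first to avoid data leakage
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2,
                                                  random_state=7)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(),
    "CART": DecisionTreeClassifier(random_state=7),
    "NB": GaussianNB(),
    "SVM": SVC(),
}

cv = KFold(n_splits=10, shuffle=True, random_state=7)
results = {}
for name, model in models.items():
    # Scaling happens inside the pipeline, so each CV fold is scaled
    # only on its own training portion
    pipe = Pipeline([("scale", MinMaxScaler()), ("clf", model)])
    scores = cross_val_score(pipe, X_train, y_train, cv=cv, scoring="accuracy")
    results[name] = (scores.mean(), scores.std())
    print(f"{name}: {scores.mean():.3f} ({scores.std():.3f})")

# Refit the best model on all training data, then check the held-out set
best = max(results, key=lambda k: results[k][0])
final = Pipeline([("scale", MinMaxScaler()), ("clf", models[best])])
final.fit(X_train, y_train)
print("validation accuracy:", round(final.score(X_val, y_val), 4))
```

Putting the scaler inside the pipeline is what makes the "avoid data leakage" point concrete: the scaler is fitted per fold, never on data the fold is evaluated on.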
Figures referred:

![Fig-01](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sjxupq1kxrv631au7cvl.PNG)
![Fig-02](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mz0t4xe03d8mo8uf49ap.PNG)
![Fig-03](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8cqzk3p81vz63hp0yg6e.PNG)
![Fig-04](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m0gu9rcbiavjbd6ejhl4.PNG)
![Fig-05](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ppynsq9qx9ubzu2d7ggl.PNG)
![Fig-06](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u8ipwb8pdyptmoyc5dd7.PNG)
![Fig-07](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oats8oiryxngieuf66w9.PNG)

References:

1. Cockburn, I.; Henderson, R.; Stern, S. The Impact of Artificial Intelligence on Innovation. In The Economics of Artificial Intelligence: An Agenda; University of Chicago Press: Chicago, IL, USA, 2019; pp. 115–146.
2. Jarrahi, M. Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision-making. Bus. Horiz. 2018, 61, 577–586. [CrossRef]
3. Yanqing, D.; Edwards, J.; Dwivedi, Y. Artificial intelligence for decision making in the era of Big Data. Int. J. Inf. Manag. 2019, 48, 63–71.
4. Paschek, D.; Luminosu, C.; Dra, A. Automated business process management in times of digital transformation using machine learning or artificial intelligence. In MATEC Web of Conferences; EDP Sciences: Les Ulis, France, 2017; Volume 121.
5. Varian, H. Artificial Intelligence, Economics, and Industrial Organization; National Bureau of Economic Research: Cambridge, MA, USA, 2018.
6. Vardarlier, P.; Zafer, C. Use of Artificial Intelligence as Business Strategy in Recruitment Process and Social Perspective. In Digital Business Strategies in Blockchain Ecosystems; Springer: Berlin/Heidelberg, Germany, 2019; pp. 355–373.
7. Gupta, P.; Fernandes, S.; Manish, J. Automation in Recruitment: A New Frontier. J. Inf. Technol. Teach. Cases 2018, 8, 118–125. [CrossRef]
8.
Geetha, R.; Bhanu Sree Reddy, D. Recruitment through artificial intelligence: A conceptual study. Int. J. Mech. Eng. Technol. 2018, 9, 63–70.
9. Syam, N.; Sharma, A. Waiting for a sales renaissance in the fourth industrial revolution: Machine learning and artificial intelligence in sales research and practice. Ind. Mark. Manag. 2018, 69, 135–146. [CrossRef]
10. Mishra, S.; Lama, D.; Pal, Y. Human Resource Predictive Analytics (HRPA) For HR Management in Organizations. Int. J. Sci. Technol. Res. 2016, 5, 33–35.
11. Jain, N.; Maitri. Big Data and Predictive Analytics: A Facilitator for Talent Management. In Data Science Landscape; Springer: Singapore, 2018; pp. 199–204.
12. Boushey, H.; Glynn, S.J. There Are Significant Business Costs to Replacing Employees. Cent. Am. Prog. 2012, 16, 1–9.
13. Martin, L. How to retain motivated employees in their jobs? Econ. Ind. Democr. 2018, 34, 25–41. [CrossRef]
14. involvement management and organizational performance: The mediating roles of job satisfaction and wellbeing. Hum. Relat. 2012, 65, 419–446. [CrossRef]
15. Zelenski, J.M.; Murphy, S.A.; Jenkins, D.A. The happy-productive worker thesis revisited. J. Happiness Stud. 2008, 9, 521–537. [CrossRef]
16. Clark, A.E. What really matters in a job? Hedonic measurement using quit data. Labour Econ. 2001, 8, 223–242. [CrossRef]
17. Clark, A.E.; Georgellis, Y.; Sanfey, P. Job satisfaction, wage changes, and quits: Evidence from Germany. Res. Labor Econ. 1998, 17, 95–121.
18. Delfgaauw, J. The effect of job satisfaction on job search: Not just whether, but also where. Labour Econ. 2007, 14, 299–317. [CrossRef]
19. Green, F. Well-being, job satisfaction and labour mobility. Labour Econ. 2010, 17, 897–903. [CrossRef]
20. Kristensen, N.; Westergaard-Nielsen, N. Job satisfaction and quits – which job characteristics matters most? Dan. Econ. J. 2006, 144, 230–249.
21. Marchington, M.; Wilkinson, A.; Donnelly, R.; Kynighou, A.
Human Resource Management at Work; Kogan Page Publishers: London, UK, 2016.
22. Van Reenen, J. Human resource management and productivity. In Handbook of Labor Economics; Elsevier: Amsterdam, The Netherlands, 2011.
23. Deepak, K.D.; Guthrie, J.; Wright, P. Human Resource Management and Labor Productivity: Does Industry Matter? Acad. Manag. J. 2005, 48, 135–145.
24. Gordini, N.; Veglio, V. Customers churn prediction and marketing retention strategies. An application of support vector machines based on the AUC parameter-selection technique in B2B e-commerce industry. Ind. Mark. Manag. 2016, 62, 100–107. [CrossRef]
25. Keramati, A.; Jafari-Marandi, R.; Aliannejadi, M.; Ahmadian, I.; Mozaffari, M.; Abbasi, U. Improved churn prediction in telecommunication industry using data mining techniques. Appl. Soft Comput. 2014, 24, 994–1012. [CrossRef]
26. Alao, D.; Adeyemo, A. Analyzing employee attrition using decision tree algorithms. Comput. Inf. Syst. Dev. Inf. Allied Res. J. 2013, 4, 17–28.
27. Nagadevara, V. Early Prediction of Employee Attrition in Software Companies-Application of Data Mining Techniques. Res. Pract. Hum. Resour. Manag. 2008, 16,
28. Rombaut, E.; Guerry, M.A. Predicting voluntary turnover through Human Resources database analysis. Manag. Res. Rev. 2018, 41, 96–112. [CrossRef]
29. Usha, P.; Balaji, N. Analysing Employee attrition using machine learning. Karpagam J. Comput. Sci. 2019, 13, 277–282.
30. Ponnuru, S.; Merugumala, G.; Padigala, S.; Vanga, R.; Kantapalli, B. Employee Attrition Prediction using Logistic Regression. Int. J. Res. Appl. Sci. Eng. Technol. 2020, 8, 2871–2875. [CrossRef]
31. Microsoft Docs: Team Data Science Process.
32. IBM HR Analytics Employee.
33. CrowdFlower. Data Science Report. 2016.
34. Antecol, H.; Cobb-Clark, D. Racial harassment, job satisfaction, and intentions to remain in the military. J. Popul. Econ. 2009, 22, 713–738. [CrossRef]
35. Böckerman, P.; Ilmakunnas, P.
Job disamenities, job satisfaction, quit intentions, and actual separations: Putting the pieces together. Ind. Relations 2009, 48, 73–96. [CrossRef]
36. Theodossiou, I.; Zangelidis, A. Should I stay or should I go? The effect of gender, education and unemployment on labour market transitions. Labour Econ. 2009, 16, 566–577. [CrossRef]
37. Böckerman, P.; Ilmakunnas, P.; Jokisaari, M.; Vuori, J. Who stays unwillingly in a job? A study based on a representative random sample of employees. Econ. Ind. Democr. 2013, 34, 25–41. [CrossRef]
38. Griffeth, R.W.; Hom, P.W.; Gaertner, S. A meta-analysis of antecedents and correlates of employee turnover: Update, moderator tests, and research implications for the next millennium. J. Manag. 2000, 26, 463–488. [CrossRef]
gouse_bme
1,225,105
Key Differences Between iOS and Android App Development
Building your mobile up is something that requires you to answer a lot of questions and make certain...
0
2022-10-20T11:14:56
https://dev.to/jasonlee/key-differences-between-ios-and-android-app-development-48de
android, ios, app, programming
Building your mobile app is something that requires you to answer a lot of questions and make certain decisions. The most important thing is to know what platform you will choose for the app you've been building. Of course, this is much easier said than done. There are two main platforms to choose from, Android and iOS. You need to be aware of all the features each platform can offer you. Naturally, it is not impossible to build for both of these. However, we would recommend you choose one of them first, and then do the other one if the need for it arises. The main thing you should pay attention to is knowing all the differences between these two. Both of them have established themselves as reliable over the years, and you will see that the vast majority of apps are made for these two. Without further ado, let's take a look at some of the key differences.

**Programming Languages**

The first, and the most important, difference to understand is what programming languages each of these uses. iOS mainly uses Swift, and Android focuses on either Java or Kotlin. For [hybrid app development](https://potado.co/hybrid-app-development-singapore), developers use JavaScript or Dart depending on which platform they choose. Naturally, you need to understand what the needs for building your app are, and based on these needs, you will make a choice. According to the experience of a high number of developers, building an iOS app is widely considered to be much simpler. Using Swift means that you will not need to invest as much time coding when compared to Java and Kotlin. The reason is that it is much more readable than these two. However, it is worth knowing that Kotlin, as a programming language, is still being developed as we speak. So, it is quite possible that this opinion will change in the future. When Kotlin becomes a more preferred language, the whole process may become much simpler for developing Android apps.
**Know Your Audience**

While it may not seem like it, knowing your audience can make all the difference when choosing one of these platforms. A lot depends on your client: if the client requires an app that needs to meet certain wishes of their customers, you need to compare the two platforms and see which one offers the better features. Besides that, demography can play a significant role. For instance, numerous surveys have shown that the younger generation in the US prefers iOS, while adults and seniors prefer Android phones. Therefore, knowing which age group you want to focus on can help with this decision. Another important thing to point out is that women usually prefer Apple over Android, while among men the situation is reversed. Plus, it is worth knowing that iOS users tend to spend more time on their devices, so if you want your app to be used frequently, this may be the way to go.

**Budget**

Another major aspect to focus on is the budget, which needs to be decided before you even start making the app. While a mobile app development budget depends on a long list of different elements, it can be estimated. Be aware that some costs may be unexpected and can surface when you need them the least. Based on your initial budget and the time and resources you have invested in the production process, you can come up with the price for someone using your app. Keep in mind that the average price of a paid Android app on Google Play is around $20, while it can reach an average of $89 on iOS devices.

**Development Tools**

One of the key differences between these two platforms is the development tools you will have at your disposal. Android development long centered on Eclipse; back in the day, the most popular IDE was NetBeans, but almost every developer will tell you that Eclipse was the more flexible option.
In the last couple of years, a new name has appeared on the market, and most [Android developers](https://potado.co/android-app-development-singapore) now use Android Studio. When it comes to iOS, the situation is much simpler: pretty much all developers use Xcode, because of the benefits it provides, chiefly by making the process simpler. These toolchains exclude each other, so you will not come across Java when building an iOS app, and vice versa. We recommend you look at some tutorials for both sides online and see which one suits you better. That way, you can be sure the decision you make fits your needs and preferences.

**Complexity**

We've already mentioned that [developing iOS apps](https://potado.co/ios-app-development-singapore) is simpler than developing for Android, but it is worth discussing in greater detail. The first thing to know is that Apple doesn't release as many devices in a year, especially compared to Android, which is found on countless devices. As a result, there are far fewer screen dimensions to support at any one time. Therefore, you don't need to handle many of the problems that can occur with Android apps, and you won't need to invest as much time and resources to build one. With Android, the situation is more difficult. It is well known that Android apps require the developer to update the graphics constantly. As we've said, Android runs on many more device variants, which results in the need for constant updates. That doesn't mean these updates will be costly, but they are there nevertheless, and you should know this.

**Summary**

Choosing between these two platforms for building your app is not easy, especially if you are a beginner. Here, we've provided some of the most important differences you need to understand between the two.
jasonlee
1,225,201
What is "open source" and why should I contribute?
Hacktoberfest 2022 is an opportunity to understand what open source is and why you should...
0
2022-10-20T14:27:24
https://rherault.fr/blog/open-source-contribution
opensource, tutos, french
---
title: What is "open source" and why should I contribute?
published: true
cover_image: https://blog.rherault.fr/app/uploads/2022/09/Quest-ce-que-lopen-source-et-comment-y-contribuer-3.png
date: 2022-09-26 09:33:53 UTC
tags: OpenSource,Tutos,french
canonical_url: https://rherault.fr/blog/open-source-contribution
---

**Hacktoberfest** 2022 is the occasion to understand what **open source** is and why you should **contribute** to it, and thus take part in this event, which runs during **the whole month of October**.

## What is open source

Open source code is **open** code, accessible to everyone. Not to be confused with the notion of "free software".

## Free software vs open source: what is the difference?

The notion of **free software** was born well before open source. It was **Richard STALLMAN** who started this movement. The difference is significant, [according to him](https://www.gnu.org/philosophy/open-source-misses-the-point.fr.html):

> _Think of "free speech", not "free beer"._
>
> <cite>Richard STALLMAN</cite>

The whole difference between the two notions lies in that sentence: it is a matter of liberty, not price. Just because code is accessible does not mean it respects all the freedoms that define free software:

- Freedom to use it,
- Freedom to study it,
- Freedom to modify it and to redistribute copies, modified or not.

Just because code is available to everyone does not mean it upholds the values of free software. Nowadays, many licenses are available, more or less restrictive, and some do not respect the **free software movement**. GitHub has even created a tool to help you choose the license that fits your needs: [https://choosealicense.com/](https://choosealicense.com/).

## Why contribute?

There are plenty of reasons to want to contribute.
You use a _framework_ you like and want to **give back to the community**? You can contribute. You run into a bug in a package you use and want to fix it yourself? You can contribute. There is no **translation** for your language? You can contribute. And many more.

The main **benefit** of contributing is that you help the community and actively take part in improving open source tools you use almost every day. Then, in terms of skills, you get to explore existing code, and thus practice understanding and adapting code you did not write. You may even pick up new skills. And if you contribute to your favorite framework / CMS / tool, you will understand much better how it works by discovering its **core**.

In short, there is nothing but upside in contributing to open source projects, so don't hesitate! Now, if you feel like contributing, how do you go about it?

## How to contribute?

As we have seen, there are many different ways to contribute:

- Fixing **issues** (bugs)
- Creating or correcting documentation
- Translating
- etc.

If you are a **beginner** and have never done this kind of thing, you can start by reading [this page](https://www.digitalocean.com/community/tutorial_series/an-introduction-to-open-source) or [this one](https://opensource.guide/fr/how-to-contribute/) (in French) created by GitHub. These quick tutorials will teach you how to use _**git**_ and explain from A to Z how to help.
To get started and discover the world of open source and contribution:

- [First Contributions](https://github.com/firstcontributions/first-contributions), a project for making your first PR (Pull Request)
- [Awesome For Beginner](https://github.com/mungell/awesome-for-beginners), a list of **cool projects for juniors**
- [Good First Issues Github](https://github.com/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22), a list of all _issues_ labeled "Good First Issue"

If you already know how to contribute and how to use git, you can dive right in. But first, you will have to find a project you want to contribute to and a problem to solve (this is often the hardest step). It can be the framework, tool, or CMS you use daily as a developer. For example, if you want to contribute to **Symfony**, there is an excellent free course on [Symfony Casts](https://symfonycasts.com/screencast/contributing)! Then just head to [the "Issues" tab of the repository](https://github.com/symfony/symfony/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc).

If you would rather practice on other projects, you can browse these pages:

- [Up For Grabs](https://up-for-grabs.net/#/)
- [Hacktoberfest projects](https://github.com/topics/hacktoberfest)
- [OSWC – Open source projects](https://www.oswc.is/search-projects)

These pages list projects that need help, and you can filter by language, difficulty, and so on. All that is left is to find an issue, assign it to yourself, and start solving it!

Now that you know how to contribute, I can only wish you a **Happy Hacktoberfest 2022**! ![🎉](https://s.w.org/images/core/emoji/14.0.0/72x72/1f389.png)

![Hacktoberfest 2022](https://blog.rherault.fr/app/uploads/2022/09/12t9r8j7n9ynxbdzhs5p-1024x576.webp)
romaixn
1,225,603
Learning Open Source Community
Finding another open source issues After I sent my first PR to open source project, I...
0
2022-10-20T19:30:56
https://dev.to/genne23v/learning-open-source-community-4nji
opensource, javascript, reacttaginput
### Finding more open source issues

After I sent my first PR to an open source project, I searched through other projects. I had learned that it takes some time to find the right issues for me, and that I need to communicate with the maintainer to get an OK before proceeding. So I expressed my intention on several repos that I could try. The responses came quicker than I expected. The first came from [react-tags](https://github.com/react-tags/react-tags), and the rest didn't take more than a day. So I started working on one of the issues in **react-tags**.

### The first step into a new project is tough

**react-tags** is a library that makes it easy to add tagging to a React project. The demo is easy to understand, and I thought it was a good project and issue for open source beginners like me. But I ran into some setup issues. The first was that I couldn't install dependencies because peer dependencies could not be resolved. Later I found out that I needed to run my terminal with Rosetta, since I use an M1 Mac. I knew there were compatibility issues with M1, but that hadn't crossed my mind at all. After spending some time resolving the dependency issues, I ran `npm run start` to launch the demo project they had set up. What I saw was an almost empty page on `localhost:3000`. I tried to fix the problem for many hours, then set up my own dev environment in a separate React app. The first contribution wasn't easy for me at the beginning either. But I guess knowing how to set up various environments is also an important part of development skill.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sgj9xfbeg5vpzoueueh8.png)*I was in an ideal world that has no dependency issues and no installation failures*

### I should check when the issue was written

**react-tags** also supports suggestions: when the user types a couple of letters, it shows a list of suggestions for autocomplete.
The issue I was assigned was to filter out the tags the user had already selected. But I found that it had already been fixed in the current version of the library. I felt frustrated, since I had already spent many hours on setup. I hadn't checked when the issue was written — it was actually raised in May 2017. So I posted a comment saying the issue seemed to be fixed already. Then I looked at the other issues in the repository and started to understand that this repo is not as active as it used to be. Given the amount of time I had already spent, I decided to pick up another issue, and found one requesting an increase in test coverage.

### Writing React tests with Enzyme

I'm familiar with **Jest** and some testing methodologies, but I had never written a unit test for a UI component. So I needed to learn about mocks, stubs, and snapshots, how to set them up, and how to use the `enzyme` library. As always, it took many hours to write the tests I wanted. I almost gave up on one of the tests, which checks that the component is removed, but it was an essential part of the component and I really wanted to add it. As usual, the feeling of achievement was great, even though my code doesn't do much.

### Conclusion

I'm not sure my PR will be accepted, as the repo hasn't accepted any PRs for a long time. At least I learned many things about open source projects. I know contributing is not always rewarding, but what makes it worthwhile is the process of learning. Anybody who starts contributing will experience the same things. And it feels good to close one of the code editors I had worked in for many days!
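The behavior that old issue asked for — keeping already-selected tags out of the autocomplete list — can be sketched in plain JavaScript. The function and parameter names below are illustrative, not react-tags' actual API:

```javascript
// Hypothetical sketch: filter autocomplete suggestions by the typed query
// and exclude any tag the user has already selected (case-insensitively).
function filterSuggestions(query, suggestions, selectedTags) {
  const selected = new Set(selectedTags.map((t) => t.toLowerCase()));
  return suggestions.filter(
    (s) =>
      s.toLowerCase().includes(query.toLowerCase()) &&
      !selected.has(s.toLowerCase())
  );
}

console.log(filterSuggestions("ja", ["Java", "JavaScript", "Jasmine"], ["Java"]));
// → [ 'JavaScript', 'Jasmine' ]
```

Using a `Set` for the selected tags keeps each suggestion check O(1) even when many tags are selected.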
genne23v