Dataset columns:
- id: int64 (5 to 1.93M)
- title: string (length 0 to 128)
- description: string (length 0 to 25.5k)
- collection_id: int64 (0 to 28.1k)
- published_timestamp: timestamp[s]
- canonical_url: string (length 14 to 581)
- tag_list: string (length 0 to 120)
- body_markdown: string (length 0 to 716k)
- user_username: string (length 2 to 30)
1,875,641
UX Designer
The more you make use of the UX designer, that is, the more of their time and effort you take, the easier the developer team's work...
0
2024-06-03T15:41:57
https://dev.to/md_abunaeem_2e8520da49c1/ux-ddijaainaar-53ak
softwaredevelopment, uxdesign, projectmanagement, webdev
The more you make use of the UX designer (that is, the more of their time and effort you take), the easier the developer team's work becomes, the less time it takes, and the less back and forth there is. Often, during the planning and prototyping stages of software development, to spare the UX designer some effort (or to use less of their time, whichever it is), we say something like: since a certain change to the Navbar has already been decided, just make the change in one place and the rest will be handled during development. And just like that, we have introduced ambiguity. Sometimes we finish the design work by propagating demo values into the dropdown options, even though selecting one specific option drives many of the following screens sequentially. We move ahead with nothing more than the thought that the concept is in our heads anyway and it will be sorted out at development time. And just like that, we have created a problem for the developers and moved on. For a Product Owner/Project Manager/Technical Manager, taking hard decisions on these subtle matters is part of bringing out a good product.
md_abunaeem_2e8520da49c1
1,875,640
Understanding the root causes of stress and burnout in coding
As a software developer with experience in building full stack web applications, I've experienced...
0
2024-06-03T15:41:38
https://dev.to/artisticmk/understanding-the-root-causes-of-stress-and-burnout-in-coding-iki
productivity, beginners, softwaredevelopment, webdev
As a software developer with experience in building full stack web applications, I've experienced firsthand how stress and burnout can have a negative impact on my coding productivity and general well-being. Stress is a physical or emotional reaction to a challenging situation, and prolonged stress can lead to burnout, a state of emotional exhaustion and lack of motivation. Despite the excitement of building applications and solving problems, the high-pressure and mentally taxing nature of the work can take its toll on our mental and emotional well-being. That's why I wanted to write this article to bring to light the top causes of stress and burnout that I have identified through reflecting on my experiences with them. By sharing my own experiences and insights, I hope to help other developers stay healthy, happy, and productive in their software development careers. With that, let's dive into the causes of stress and burnout when coding: ### 1. Lack of goal setting and insufficient planning: As a developer, it can be easy to jump right into coding without taking time to set clear objectives and develop a solid plan. This can be overwhelming, especially if the project is large. I used to find myself rushing into a coding project without a clear plan and set of objectives. When I approached app development like this, I would make decisions intuitively without fully considering the implications. Later on, I would realize my initial approach wasn't the best one, and that would trigger a sense of having accomplished nothing, which would cause immense stress and ultimately lead to burnout. This cycle of lacking direction made the development process long and miserable, causing me to lose passion and motivation for the project. In turn, I would often abandon personal projects and move on to another, repeating the same cycle. When we lack clear objectives and fail to give each task the focus and attention it deserves, it's easy to fall into the trap of multitasking - doing multiple things at once. This can be a great source of stress and anxiety, as our brains are not designed to handle multiple complex tasks simultaneously. In my experience, I've learned that taking the time to lay out your plan and set step-by-step objectives before starting a coding project is crucial. You gain a sense of reward and progress as you code and achieve those objectives, which keeps you happy and energized. Without clear direction and focus, you can easily feel overwhelmed and lose motivation. ### 2. Perfectionism: Many developers strive for perfection in their work, but this mindset can be a double-edged sword. On one hand, perfectionism can drive us to conceive unique ideas and create high-quality code and products. On the other hand, it can lead to a sense of dissatisfaction and, eventually, to burnout. As someone who has struggled with perfectionism, I can tell you that the desire to achieve perfection often comes at a great cost. Have you ever poured all your energy and creativity into a project, only to suddenly doubt its quality and aesthetic appeal once it's completed? The endless cycle of conceptualizations and iterations that follows can be incredibly miserable and lead to burnout. Perfectionism can also lead to setting unrealistic goals and having overly high expectations for the outcome of your work. Many developers strive to build applications that deliver unmatched performance and features, but setting goals that are too ambitious can lead to feelings of disappointment and negative self-judgment. It's important to find a balance between striving for excellence and being realistic about what can be achieved. By setting achievable goals and celebrating our efforts, we can avoid the emotional toll of perfectionism and stay motivated to build great products. ### 3. Miscommunication: This is a major source of stress and burnout in the software development industry. Every developer at some point has encountered a challenging client or experienced misunderstandings with their team members. Clients may struggle to effectively convey their vision, particularly if they are not familiar with the technical aspects of software development. This can lead to unrealistic demands and breakdowns in communication, causing the developer to feel undervalued or unappreciated. Miscommunication between team members can also hinder effective collaboration, leading to delays, misunderstandings, and a sense of dissatisfaction and burnout. To mitigate these risks, it's important for developers to prioritize clear and effective communication with both clients and team members. Prioritizing emotional intelligence and empathy in client interactions and team collaborations can foster a more positive and productive work environment. ### In conclusion Stress and burnout are silent killers of productivity in coding that can have a significant impact on the mental and emotional well-being of software developers. Lack of goal setting, perfectionism, and miscommunication are some of the top causes that can lead to stress and burnout. By recognizing and addressing these factors, developers can reduce the risk of burnout and increase their productivity, motivation, and overall well-being.
artisticmk
1,875,645
Enclave Games Monthly Report: May 2024
A lot has happened in May: from announcing Gamedev.js Jam 2024 winners and sending them all the...
0
2024-06-03T15:54:18
https://enclavegames.com/blog/monthly-report-may-2024/
enclavegames, javascript, monthlyreport, gamedev
--- title: Enclave Games Monthly Report: May 2024 published: true date: 2024-06-03 15:39:23 UTC tags: enclavegames,javascript,monthlyreport,gamedev canonical_url: https://enclavegames.com/blog/monthly-report-may-2024/ cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xn793595pwa5bhcw9pu2.png --- A lot has happened in May: from [announcing Gamedev.js Jam 2024 winners](https://gamedevjs.com/competitions/gamedev-js-jam-2024-winners-announced/) and sending them all the prizes, through finally writing about my [burnout](https://end3r.com/blog/burnout/), to looking into the bright future with the newly formed [OP Guild](https://enclavegames.com/blog/op-guild/) which I’m going to lead. ### Games HA! So, SOMETHING actually happened: I’ve upgraded [Enclave Phaser Template](https://github.com/EnclaveGames/Enclave-Phaser-Template/) to Phaser 3.80.1, did the same to the [NSHex Counter](https://nshex.enclavegames.com/counter/), and fixed a few bugs there. I still need to upgrade the way tiles are showed on the screen as this was heavily requested by the community, but I’ll get there. ### Writing A whole bunch of blog posts here and there: - [10.05] Gamedev.js: [Gamedev.js Jam 2024 winners announced!](https://gamedevjs.com/competitions/gamedev-js-jam-2024-winners-announced/) - [11.05] Gamedev.js: [Best entries from all the Challenges in Gamedev.js Jam 2024](https://gamedevjs.com/competitions/best-entries-from-all-the-challenges-in-gamedev-js-jam-2024/) - [13.05] Gamedev.js: [New Gamedev.js Jam 2024 t-shirt: Power!](https://gamedevjs.com/competitions/new-gamedev-js-jam-2024-t-shirt-power/) - [16.05] Enclave Games: [Into the future with OP Guild](https://enclavegames.com/blog/op-guild/) - [20.05] End3r’s Corner: [Burning out… and shifting focus again](https://end3r.com/blog/burnout/) - [20.05] Gamedev.js: [Athena Crisis goes Open Source](https://gamedevjs.com/games/athena-crisis-goes-open-source/) - [22.05] Gamedev.js: [Frontend Nation 2024 and workshop ticket giveaway](https://gamedevjs.com/events/frontend-nation-2024-and-workshop-ticket-giveaway/) - [24.05] Gamedev.js: [Bytebeat collection](https://gamedevjs.com/tools/bytebeat-collection/) - [27.05] Enclave Games: [Gamedev.js Weekly newsletter gets… a mobile template!](https://enclavegames.com/blog/gamedevjs-weekly-template/) - [27.05] Gamedev.js: [Rogue Engine](https://gamedevjs.com/tools/rogue-engine/) - [29.05] Gamedev.js: [Fallout 2 remake in 3D](https://gamedevjs.com/games/fallout-2-remake-in-3d/) - [31.05] Gamedev.js: [All things Phaser](https://gamedevjs.com/tools/all-things-phaser/) I’ll skip listing Polish ones published on [NeuroshimaHex.pl](https://neuroshimahex.pl/) and [TataDeveloper](https://tatadeveloper.end3r.com/). ![Burnout](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xapj7vkizzxdwyyftss3.jpg) ### Design Not much actually. ### Events Almost all the prizes from **Gamedev.js Jam 2024** were sent out, I’m only waiting for the printing of Badlucky t-shirts to ship them to the winners. Also, I’ve already created a landing page for [Gamedev.js Jam 2025](https://gamedevjs.com/jam/2025/), and it’s dedicated [2025 Itch page](https://itch.io/jam/gamedevjs-2025). 
![OP Guild](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bxl98ace5arkfb0ke4b2.png) ### Other The [Gamedev.js Weekly](https://gamedevjsweekly.com/) newsletter, beside being regularly sent out (same as the [Phaser World](https://phaser.io/community/newsletter)), got a [new mobile template](https://enclavegames.com/blog/gamedevjs-weekly-template/) after running for more than a decade. ### Plans for the next month More coding, cleaning the TODO list, and diving into [OP Guild](https://enclavegames.com/blog/op-guild/) duties. Plus the preparations to the **13th edition** of the [js13kGames](https://js13kgames.com/) competition.
end3r
1,875,638
ServBay 1.3.5 Official Release: Significant Updates and Enhancements
June 1, 2024 - The ServBay team is pleased to announce the release of the latest version 1.3.5. This...
0
2024-06-03T15:36:34
https://dev.to/servbay/servbay-135-official-release-significant-updates-and-enhancements-25kn
webdev, programming, productivity, php
June 1, 2024 - The [ServBay](https://www.servbay.com) team is pleased to announce the release of the latest version 1.3.5. This update includes key software package upgrades, updates to the ServBay Runtime and ServBay Development Library, and introduces several new features and important optimizations aimed at providing users with a more efficient and convenient experience. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/safc7zy3vgovm28352rh.png) ### Key Updates and Optimizations To enhance performance, in ServBay 1.3.5, the Opcache, XDebug, and MongoDB PHP modules are no longer loaded by default. Users can manually enable these modules by editing the respective configuration files in /Applications/ServBay/package/etc/php: - **Opcache**: Edit the `conf.d/opcache.ini` file and uncomment the relevant lines. - **XDebug**: Edit the `conf.d/xdebug.ini` file and uncomment the relevant lines. - **MongoDB**: Edit the `conf.d/mongodb.ini` file and uncomment the relevant lines. This adjustment allows for more efficient use of system resources, enabling users to load the necessary modules based on their actual needs, thereby improving system performance. [**Click for Early Access**](https://www.servbay.com) ### ServBay Runtime and ServBay Development Library Updates As part of our ongoing efforts to add new features and improve security, the ServBay Runtime and ServBay Development Library have been upgraded: - **ServBay Runtime**: Updated to version 1.0.7 for Intel (x86_64) architecture and 1.1.7 for Apple Silicon (ARM, M1/M2/M3) architecture. - **ServBay Development Library**: Updated to version 1.0.7 for Intel (x86_64) architecture and 1.1.7 for Apple Silicon (ARM, M1/M2/M3) architecture. - **New Packages for ServBay Runtime**: Including scws, protobuf, protobuf-c, and postgis dependency modules, providing developers with more tool options. Users are encouraged to upgrade to the latest version of the Runtime through the "Services" panel to receive the latest security updates and performance optimizations. ### Software Package Upgrades This update includes upgrades to several important software packages, including PHP, Node.js, PostgreSQL, and MariaDB. Specific versions are as follows: - **PHP**: Updated to versions 8.2.19, 8.3.7, and 8.4.0-dev-20240531 (requires an upgrade to ServBay Runtime 1.1.7 or 1.0.7) - **Node.js**: Updated to versions 22.2.0, 20.14.0, and 18.20.3 - **PostgreSQL**: Updated to versions 16.3, 15.7, 14.12, 13.15, and 12.19 (requires an upgrade to ServBay Runtime 1.1.7 or 1.0.7) - **MariaDB**: Updated to versions 11.5.1, 11.4.2, 11.2.4, 11.1.5, 11.0.6, 10.11.8, 10.6.18, 10.5.25, and 10.4.34 These upgrades ensure that users can take advantage of the latest software features and security patches, enhancing system stability and security. ### New Features ServBay 1.3.5 introduces several new extension modules: - **PostgreSQL Plugin Support**: Including PostGIS, pgrouting, pgvector, pg_jieba, and zhparser, providing users with more powerful database extension capabilities. - **New PHP Modules**: Including Scws, Swoole, Phalcon, and imagick, further enhancing PHP functionality and extensibility. These new features require an upgrade to the latest version of ServBay Runtime and the reinstallation of the corresponding software packages. The upgrade process is smooth and will not result in data loss, so users can upgrade with confidence. 
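Referring back to the module configuration described at the top of this post: the exact lines to uncomment are not shown there, but enabling OPcache in a typical PHP `opcache.ini` looks roughly like the sketch below. The directive names are standard PHP settings; ServBay's actual file contents may differ.

```ini
; /Applications/ServBay/package/etc/php/<version>/conf.d/opcache.ini (illustrative only)
; Uncomment the lines to load and enable the extension
zend_extension=opcache
opcache.enable=1
```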
Additionally, ServBay 1.3.5 adds support for the Turkish language and includes help links and a dedicated ServBay Pro support email on the About page, making it easier for users to get more help and support. On the host page, users can now open their editors with a single click. This feature supports popular editors such as VSCode, PHPStorm, and Sublime Text, greatly enhancing development efficiency. ### Bug Fixes and System Optimizations This update also fixes the issue where redis.sh could not exit or restart when setting a password, and optimizes the logic of the Helper upgrade script, the Helper connection check logic, and the dnsmasq configuration writing logic. These optimizations further enhance system stability and user experience. Can’t wait to code? **[Download here!](https://www.servbay.com)** --- Big thanks for sticking with ServBay. Your support means the world to us. Got questions or need a hand? Our tech support team is just a shout away. Here's to making web development fun and fabulous! 🥳 If you want to get the latest information, follow [X(Twitter)](https://twitter.com/ServBayDev) and [Facebook](https://www.facebook.com/ServBay.Dev). If you have any questions, our staff would be pleased to help, just join our [Discord](https://talk.servbay.com/).
servbay
1,875,636
Brod; boss kafka in elixir
Elixir has rapidly gained recognition as a powerful language for developing distributed systems. Its...
0
2024-06-03T15:32:32
https://dev.to/darnahsan/brod-boss-kafka-in-elixir-eo
elixir, kafka, cap, distributedsystems
Elixir has rapidly gained recognition as a powerful language for developing distributed systems. Its concurrency model, based on the Erlang VM (BEAM), allows developers to build highly scalable and fault-tolerant applications. One of the areas where Elixir shines is in reading messages from Kafka using Broadway to build concurrent and multi-stage data ingestion and processing pipelines. However, when it comes to producing messages to Kafka, Elixir's ecosystem seems to lack a unified focus, leading to some confusion. Elixir's inherent support for concurrency and fault tolerance makes it an ideal choice for distributed systems. The language's lightweight process model, along with features like supervisors and the actor model, enables developers to create systems that can handle massive loads and recover gracefully from failures. This makes Elixir particularly well-suited for distributed systems, where reliability and performance are crucial. In the Elixir ecosystem, there is a clear focus on consuming messages from Kafka. Libraries like Broadway make it easy to build sophisticated data ingestion pipelines. Broadway allows developers to define multi-stage pipelines that can process large volumes of data concurrently, leveraging Elixir's strengths in concurrency and fault tolerance. While Broadway excels at consuming messages, there's a slight terminology hiccup. The method for sending messages is called produce, which might be a bit confusing. It's important to remember that Broadway focuses on the consumer side of the Kafka equation. However, when it comes to producing messages to Kafka, Elixir's libraries present a more fragmented landscape. There are three primary Kafka libraries available for Elixir developers: brod, kaffe, and kafka_ex. Each of these libraries has its own strengths and use cases. brod: An Erlang client for Kafka, brod is known for its robustness and performance. It operates seamlessly within the BEAM ecosystem, taking advantage of Erlang's mature infrastructure for building distributed systems. However, working with brod can be cumbersome and requires a fair amount of setup. Despite this, it remains a reliable and performant choice for Kafka integration. kaffe: A wrapper around brod, kaffe simplifies the process of interacting with Kafka, particularly for those using Heroku Kafka clusters. By abstracting away some of the complexities of brod, kaffe makes it easier for developers to get started with Kafka in Elixir. It focuses on providing a more user-friendly experience while still leveraging the underlying power of brod. kafka_ex: Currently in a state of transition, kafka_ex is undergoing significant changes that are not backward compatible. This situation can be likened to Python's shift from version 2 to version 3, where developers faced considerable breaking changes. While kafka_ex has been a popular choice, the ongoing transition means developers need to be cautious about using it in production until the changes stabilize. Keeping all of the above in mind, and after scouring elixirforum.com, the most shared opinion was to go with brod and write a wrapper around it. What is lacking, however, is guidance on how to set it up in Elixir, and in particular in Phoenix, since brod is an Erlang library. To set up brod in Phoenix, you need to define a supervisor process that can then be added to your Phoenix supervision tree like any other process. Setting up the brod client with SASL and SSL as a supervisor can be done as below.
```elixir
defmodule Maverick.Kafka.BrodSupervisor do
  @moduledoc """
  Maverick.Kafka.BrodSupervisor
  """
  use Supervisor

  alias Maverick.Kafka, as: Kafka

  def start_link(_config) do
    Supervisor.start_link(__MODULE__, [], name: __MODULE__)
  end

  def init(_config) do
    :ok =
      :brod.start_client(
        Kafka.hosts(),
        Kafka.brod_client(),
        ssl: true,
        ssl_options: [
          # from CAStore package
          cacertfile: CAStore.file_path(),
          verify: :verify_peer,
          customize_hostname_check: [
            match_fun: :public_key.pkix_verify_hostname_match_fun(:https)
          ]
        ],
        sasl: Kafka.authentication(),
        auto_start_producers: true,
        reconnect_cool_down_seconds: 10,
        default_producer_config: [
          required_acks: -1,
          partition_buffer_limit: 1024
        ]
      )

    children = []
    Supervisor.init(children, strategy: :one_for_one)
  end
end
```

After this, you can wrap the message producer for convenience:

```elixir
defmodule Maverick.Kafka.Brod do
  @moduledoc """
  Maverick.Kafka.Brod
  """
  use Retry.Annotation

  @retry with: exponential_backoff() |> randomize() |> expiry(10_000)
  def produce(client, topic, partition, key, msg) do
    :brod.produce_sync(client, topic, partition, key, msg)
  end
end
```

I highly recommend checking out the `retry` hex package; it will save you from all those network issues, as the network is always unreliable. Before I close this post, the thing I would like to highlight is the `required_acks` setting and what it means. The required_acks (also known as acks) setting determines how many acknowledgements the producer requires the leader to have received before considering a request complete. This setting has a significant impact on the durability and consistency guarantees of your messages. The required_acks can be set to different values:

- 0: The producer does not wait for any acknowledgement from the server at all. This means that the producer will not receive any acknowledgement for the messages sent, and message loss can occur if the server fails before the message is written to disk.
- 1: The leader writes the record to its local log but responds without waiting for full acknowledgement from all followers. This means that the message is acknowledged as soon as the leader writes it, but before all replicas have received it.
- -1 (or all): The leader waits for the full set of in-sync replicas to acknowledge the record. This is the strongest guarantee and means that the message is considered committed only when all in-sync replicas have acknowledged it.

In the context of the brod library, setting required_acks: -1 ensures that the producer waits for acknowledgements from all in-sync replicas before considering the message successfully sent. This provides the highest level of durability since the message will be available even if the leader broker fails after the message is acknowledged. I hope this makes it simple for people looking to work with Kafka using Elixir.
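For completeness, here is a minimal sketch of how the wrapper above might be used once the supervisor is running; the client name, topic, partition, key, and payload below are hypothetical values, not something defined in the post.

```elixir
# Assumes Maverick.Kafka.BrodSupervisor has been added to your application's
# supervision tree and the brod client is connected.
:ok =
  Maverick.Kafka.Brod.produce(
    # hypothetical client name, i.e. whatever Kafka.brod_client() returns
    :maverick_client,
    # hypothetical topic, partition, key, and value
    "orders",
    0,
    "order-123",
    ~s({"id": 123, "status": "created"})
  )
```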
darnahsan
1,875,635
Extracting Text from Uploaded Files in Node.js: A Continuation
Introduction In our previous article, we covered the basics of uploading files in a Node.js...
0
2024-06-03T15:30:21
https://dev.to/luqmanshaban/extracting-text-from-uploaded-files-in-nodejs-a-continuation-416j
node, tutorial, webdev, filehandling
**Introduction** In our previous [article](https://dev.to/luqmanshaban/how-to-upload-a-file-in-nodejs-a-step-by-step-guide-5cf6), we covered the basics of uploading files in a Node.js application. Now, let’s take it a step further by extracting text from uploaded files. This tutorial will guide you through using the `officeparser` library to parse and extract text from office documents, such as PDFs, in a Node.js environment. **Step 1**: Install the `officeparser` Library First, install the `officeparser` library if you haven’t already: `npm install officeparser` **Step 2**: Create the Extraction Function Next, create a function to extract text from the uploaded file. Here’s the code snippet:

```js
import { parseOfficeAsync } from "officeparser";

async function extractTextFromFile(path) {
  try {
    const data = await parseOfficeAsync(path);
    return data.toString();
  } catch (error) {
    return error;
  }
}

const fileText = await extractTextFromFile('files/Luqman-resume.pdf');
console.log(fileText);
```

This function utilizes `parseOfficeAsync` to asynchronously read and extract text from the specified file path. If successful, it converts the data to a string and returns it; otherwise, it catches and returns any errors encountered. **Step 3**: Integrate with Node.js endpoints You can follow the tutorial in this [Article](https://dev.to/luqmanshaban/how-to-upload-a-file-in-nodejs-a-step-by-step-guide-5cf6) to create an endpoint that supports file upload. **Conclusion** By following this tutorial, you’ve extended your Node.js application to extract text from uploaded files. This can be particularly useful for applications requiring document processing or data extraction from user-uploaded files. Stay tuned for more advanced features and enhancements in our next article! --- Stay Updated! If you enjoyed this tutorial and want to stay updated with more tips and guides, [subscribe to our newsletter](https://mailchi.mp/9275d6947c46/luqmandev) for the latest content straight to your inbox.
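To illustrate the integration step, here is a hedged sketch of an upload endpoint that calls `extractTextFromFile`; it assumes Express and multer are installed and that the helper from Step 2 is exported from a local `./extract.js` module, none of which is part of the original tutorial.

```js
// server.mjs - minimal sketch, assuming Express + multer handle the upload
import express from "express";
import multer from "multer";
import { extractTextFromFile } from "./extract.js"; // helper from Step 2 (assumed export)

const app = express();
const upload = multer({ dest: "files/" }); // uploaded files land in ./files

app.post("/upload", upload.single("file"), async (req, res) => {
  try {
    // req.file.path points at the temporary file written by multer
    const text = await extractTextFromFile(req.file.path);
    res.json({ text });
  } catch (error) {
    res.status(500).json({ error: "Failed to extract text" });
  }
});

app.listen(3000, () => console.log("Listening on :3000"));
```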
luqmanshaban
1,873,949
Achieve Game-Like Scaling in Discord Activities and Web Apps Using CSS
Discord Activities are awesome! They run inside Discord and can be all sorts of cool things like...
0
2024-06-03T15:30:00
https://dev.to/waveplay/achieve-game-like-scaling-in-discord-activities-and-web-apps-using-css-874
javascript, programming, node, discord
**[Discord Activities](https://support.discord.com/hc/en-us/articles/4422142836759-Activities-on-Discord)** are awesome! They run inside Discord and can be all sorts of cool things like games, quizzes, and more. But if you're developing using web technologies rather than game engines, you might run into a common problem: scaling. In this article, we'll show you how to achieve game-like scaling in your Discord Activities and web apps using plain CSS. This will allow you to create responsive designs that look great on any device, **[just like a game](https://docs.unity3d.com/2020.1/Documentation/Manual/HOWTO-UIMultiResolution.html)**! ![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ak6x8twhs5fbnc7q15x.png) ## The Problem with Scaling Screen sizes vary widely, even among devices of the same type. This can make it challenging to create a design that looks good on all screens. If you've ever built a Discord Activity using HTML and CSS, you've probably encountered this issue. Suppose you have a button with a fixed size. On a large desktop screen, the button might look tiny, while on a smaller screen, it might be too big and take up a large portion of the screen. That's fine for standard web apps, but for Discord Activities, you may want everyone to have a similar experience, regardless of their screen size. This is even more of a problem when you minimize the activity window or navigate elsewhere in Discord. Your page might look tiny and miss out a lot of the content, or it might be too large and require scrolling to see everything! Before scaling, content looks huge and most of it is cut off: ![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dui0h7pkqqs7htb6r9ql.png) After scaling, content fits the screen and is fully visible: ![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qe2qhertaml03di76vdr.png) ## Create a Discord Activity Project **[Create a Discord Activity project](https://dev.to/waveplay/how-to-build-a-discord-activity-easily-with-robojs-5bng)** if you don't have one already: ```bash npx create-robo my-activity -k activity ``` We'll be using **[React](https://react.dev)** and **[TypeScript](https://docs.roboplay.dev/robojs/typescript)** for this example, but you can apply the same principles to any frontend framework or vanilla **[HTML](https://developer.mozilla.org/en-US/docs/Web/HTML)** and **[CSS](https://developer.mozilla.org/en-US/docs/Web/CSS)**. We highly recommend using **[create-robo](https://docs.roboplay.dev/cli/create-robo)** to create a **Discord Activity** project. **[Robo.js](https://roboplay.dev/docs)** providers a lot of features and tools to help you build your activity faster and more efficiently, such as **[Multiplayer Support](https://dev.to/waveplay/how-to-add-multiplayer-to-your-discord-activity-lo1)**, **[Easy Hosting](https://docs.roboplay.dev/discord-activities/hosting)**, **[Streamlined Tunnels](https://docs.roboplay.dev/discord-activities/tunnels)**, **[Built-in Database](https://docs.roboplay.dev/robojs/flashcore)**, **[Plugin System](https://docs.roboplay.dev/plugins/overview)**, and so much more! {% embed https://dev.to/waveplay/how-to-build-a-discord-activity-easily-with-robojs-5bng %} ## Achieving Game-Like Scaling First, decide on a base resolution for your activity. This will be the resolution you design your activity for. For this example, let's use `1280x720`. 
This is a common resolution for games and will work well for our purposes, especially since Discord enforces this aspect ratio for activities. In game design, developers often create assets for a **[base resolution](https://gamemaker.io/en/tutorials/the-basics-of-scaling-the-game-camera)** and scale up or down from there to fit the player's screen. This ensures consistency in the visual experience. We can apply the same principle to our Discord Activities and web apps using CSS via the `transform` property. Next, let's create a wrapper component that will handle the scaling for us. We'll use the `transform` property to scale the content based on the user's screen size. Here's a simple example: ```tsx // src/components/ScaleProvider.tsx import { createContext, useContext, useState, useEffect, ReactNode } from 'react' interface ScaleContextType { scale: number } const ScaleContext = createContext<ScaleContextType | undefined>(undefined) export const useScale = (): ScaleContextType => { const context = useContext(ScaleContext) if (!context) { throw new Error('useScale must be used within a ScaleProvider') } return context } interface ScaleProviderProps { baseWidth: number baseHeight: number children: ReactNode | ReactNode[] } export const ScaleProvider = (props: ScaleProviderProps) => { const { baseWidth, baseHeight, children } = props const [scale, setScale] = useState(1) useEffect(() => { const handleResize = () => { const scaleWidth = window.innerWidth / baseWidth const scaleHeight = window.innerHeight / baseHeight setScale(Math.min(scaleWidth, scaleHeight)) } handleResize() window.addEventListener('resize', handleResize) return () => window.removeEventListener('resize', handleResize) }, [baseWidth, baseHeight]) return ( <ScaleContext.Provider value={{ scale }}> <div style={{ transform: `scale(${scale})`, transformOrigin: 'top left', width: '100vw', height: '100vh' }} > <div style={{ width: baseWidth, height: baseHeight, display: 'flex', alignItems: 'center', justifyContent: 'center' }} > {children} </div> </div> </ScaleContext.Provider> ) } ``` Now, wrap your main app component with the `ScaleProvider`: ```tsx // src/app/App.tsx import { DiscordContextProvider } from '../hooks/useDiscordSdk' import { Activity } from './Activity' import { ScaleProvider } from './ScaleProvider' import './App.css' export default function App() { return ( <DiscordContextProvider> <ScaleProvider baseWidth={1280} baseHeight={720}> <Activity /> </ScaleProvider> </DiscordContextProvider> ) } ``` This `ScaleProvider` component will scale its children based on the user's screen size. It calculates the scale factor by dividing the screen width and height by the base resolution. It then applies this scale factor to the content using the `transform` property. Transforms run on the GPU, so they're very efficient and won't cause performance issues! If you need to reference the scale factor in your components, you can use the `useScale` hook. ```tsx const { scale } = useScale() ``` ## Conclusion Now, your activity will scale up or down based on the user's screen size, providing a consistent experience for everyone. You can design your activity for the base resolution and trust that it will look great on any screen! Don't forget to **[join our Discord server](https://roboplay.dev/discord)** to chat with other developers, ask questions, and share your projects. We're here to help you build amazing apps with **Robo.js**! 
🚀 {% cta https://roboplay.dev/discord %} 🚀 **Community:** Join our Discord Server {% endcta %} Our very own Robo, **Sage**, is there to answer any questions about Robo.js, Discord.js, and more! {% embed https://roboplay.dev/discord %} {% embed https://dev.to/waveplay/how-to-add-multiplayer-to-your-discord-activity-lo1 %}
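As a small addendum to the `useScale` hook described above, here is a hedged sketch of a component that converts pointer coordinates from screen space back into the 1280x720 design space; the component name, file path, and handler are hypothetical and not part of the original template.

```tsx
// src/components/ClickTracker.tsx - sketch only; assumes it renders inside ScaleProvider
import { useScale } from './ScaleProvider'

export const ClickTracker = () => {
	const { scale } = useScale()

	// Divide by the scale factor so coordinates map back into the 1280x720 design space
	const handleClick = (event: React.MouseEvent<HTMLDivElement>) => {
		const designX = event.clientX / scale
		const designY = event.clientY / scale
		console.log(`Clicked at design coordinates (${designX}, ${designY})`)
	}

	return <div style={{ width: '100%', height: '100%' }} onClick={handleClick} />
}
```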
waveplay-staff
1,875,634
Add Visual Studio Code to your OSX zsh PATH
After installing Visual Studio Code, it will not automatically be added to your terminal path, so you...
0
2024-06-03T15:27:10
https://dev.to/almatins/add-visual-studio-code-to-your-osx-zsh-path-63c
osx, zsh, vscode, path
After installing Visual Studio Code, it will not automatically be added to your terminal path, so you will need to add it manually. Here is how. Open your terminal and run this command:

```bash
cat << EOF >> ~/.zprofile
# Add Visual Studio Code (code)
export PATH="\$PATH:/Applications/Visual Studio Code.app/Contents/Resources/app/bin"
EOF
```

Restart your terminal, and you should now be able to open Visual Studio Code directly from your terminal using this command: `% code .` This will open the current folder in Visual Studio Code. That’s it! Resources: [Visual Studio Code for Mac](https://code.visualstudio.com/docs/setup/mac)
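As a quick verification (not part of the original steps), you can open a new terminal and confirm the `code` command is now on your PATH:

```bash
# Should print the path inside Visual Studio Code.app and then a version number
which code
code --version
```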
almatins
1,875,633
Introduction to Shell Scripting in Linux- DevOps Prerequisite #3
Introduction to Shell Scripting in Linux Shell scripting is a powerful way to automate repetitive...
0
2024-06-03T15:25:30
https://dev.to/iaadidev/introduction-to-shell-scripting-in-linux-devops-prerequisite-3-1nd3
shellscripting, linux, devops, learninginpublic
### Introduction to Shell Scripting in Linux Shell scripting is a powerful way to automate repetitive tasks and manage system operations in Linux. A shell script is a file containing a series of commands that are executed by the shell, the command-line interpreter in Linux. This blog will introduce you to the basics of shell scripting, provide examples of common tasks, and explain key concepts to get you started. ### Why Shell Scripting? Shell scripting offers numerous benefits: 1. **Automation**: Automate repetitive tasks to save time and reduce errors. 2. **Efficiency**: Perform complex operations with simple scripts. 3. **System Administration**: Manage system operations like backups, monitoring, and updates. 4. **Custom Solutions**: Create tailored solutions for specific needs. ### Getting Started #### Creating Your First Script To create a shell script, open a text editor and type your commands. Save the file with a `.sh` extension. Here's a simple example: ```bash #!/bin/bash echo "Hello, World!" ``` The `#!/bin/bash` line is called a shebang, which tells the system to use the Bash shell to execute the script. #### Making the Script Executable Before running your script, you need to make it executable: ```bash chmod +x your_script.sh ``` You can then run the script using: ```bash ./your_script.sh ``` ### Basic Concepts #### Variables Variables store data that can be used later in the script. Assigning a value to a variable is simple: ```bash #!/bin/bash name="John" echo "Hello, $name!" ``` #### Conditional Statements Conditionals allow you to execute commands based on certain conditions: ```bash #!/bin/bash echo "Enter a number:" read number if [ $number -gt 10 ]; then echo "The number is greater than 10." else echo "The number is 10 or less." fi ``` #### Loops Loops are used to repeat commands. Here's an example of a `for` loop: ```bash #!/bin/bash for i in {1..5}; do echo "Iteration $i" done ``` And an example of a `while` loop: ```bash #!/bin/bash count=1 while [ $count -le 5 ]; do echo "Count: $count" count=$((count + 1)) done ``` #### Functions Functions help you organize your script into reusable chunks of code: ```bash #!/bin/bash greet() { echo "Hello, $1!" } greet "Alice" greet "Bob" ``` ### Real-World Examples #### Backup Script A simple script to back up a directory: ```bash #!/bin/bash source_dir="/path/to/source" backup_dir="/path/to/backup" timestamp=$(date +%Y%m%d%H%M%S) backup_file="backup_$timestamp.tar.gz" tar -czf $backup_dir/$backup_file $source_dir echo "Backup of $source_dir completed. File: $backup_file" ``` #### System Monitoring Script A script to monitor disk usage and send an alert if usage exceeds a threshold: ```bash #!/bin/bash threshold=80 usage=$(df / | grep / | awk '{print $5}' | sed 's/%//') if [ $usage -gt $threshold ]; then echo "Disk usage is above $threshold%. Current usage: $usage%" | mail -s "Disk Usage Alert" user@example.com fi ``` ### Conclusion Shell scripting is a versatile tool that can greatly enhance your productivity as a Linux user or administrator. By mastering the basics of variables, conditionals, loops, and functions, you can automate a wide range of tasks and streamline your workflow. Start experimenting with your own scripts and explore the vast capabilities that shell scripting offers. Happy scripting!
iaadidev
1,875,630
Add DS-U02P Webcam to Debian 12
So, I reinstalled my personal computer recently with Debian 12 and it runs smoothly and as expected....
0
2024-06-03T15:24:20
https://dev.to/almatins/add-ds-u02p-webcam-to-debian-12-1anf
dsu02p, webcam, debian, firmware
So, I reinstalled my personal computer recently with Debian 12 and it runs smoothly and as expected. Then, I tried to connect my **DS-U02P** web camera from **Hikvision**. I tested it using Cheese, but it was unstable and I could not get it to work no matter what. I also tried several other ways to test the webcam and still faced the same issues. Then, I tried to find out if there was firmware for the web camera out there. After downloading the firmware, I found the Debian 12 wiki page for installing firmware. Based on the document, I need the `/usr/local/lib/firmware` folder and have to put my downloaded firmware file (the one with the `.bin` extension) there. Since I did not have the folder yet, I needed to create it. After that, I copied the firmware file to that folder and ran the command below. `$ sudo update-initramfs -c -k all` After the command completed, I tested the web camera and it worked as expected. To make sure, I restarted my machine and did the test again; the web camera still worked as expected. Hopefully, you found this post useful. Cheers!
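Put together, the steps described above look roughly like this; the firmware file name is just an example and depends on what you downloaded.

```bash
# Create the local firmware folder referenced by the Debian wiki instructions
sudo mkdir -p /usr/local/lib/firmware

# Copy the downloaded firmware blob there (file name is hypothetical)
sudo cp ~/Downloads/ds-u02p-firmware.bin /usr/local/lib/firmware/

# Rebuild the initramfs for all installed kernels so the firmware gets picked up
sudo update-initramfs -c -k all
```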
almatins
1,875,567
Deploy MongoDB Replica Set on K8s
So, I set a new k8s cluster on DigitalOcean recently, and I want to share the way I deploy my MongoDB...
0
2024-06-03T15:18:23
https://dev.to/almatins/deploy-mongodb-replica-set-on-k8s-4e0d
kubernetes, mongodb, replicaset, deploy
So, I set a new k8s cluster on DigitalOcean recently, and I want to share the way I deploy my MongoDB Replica Set on the k8s cluster. Let’s start it. ## Storage Class I use Longhorn instead of the default do-block-storage provided by DigitalOcean as the default. I set up the longhorn using the helm chart. Since I use Rancher to manage my cluster, I install Longhorn from the App menu and choose Longhorn with the default value. super simple! ## Cluster IP Service To create a statefulset, we will need to provide the serviceName so, let’s create a ClusterIP service for our MongoDB. The `mongodb-clusterip.yaml` file looks like this. ```yaml apiVersion: v1 kind: Service metadata: annotations: field.cattle.io/description: MongoDB Cluster IP name: mongodb-clusterip namespace: default spec: ports: - name: mongo port: 27017 protocol: TCP targetPort: 27017 selector: app: mongodb type: ClusterIP ``` Then execute the kubectl apply command `kubectl apply -f mongodb-clusterip.yaml` ## Secret We will save the mongodb root user and mongodb root password in the Secret, so we need to create a new Secret for our MongoDB. The `mongodb-secret.yaml` file looks like this ```yaml apiVersion: v1 data: password: base6StringVersionOfpassword user: base6StringVersionOfuser kind: Secret metadata: name: mongodb-secret namespace: default ``` Then execute the kubectl apply command `kubectl apply -f mongodb-secret.yaml` ## StatefulSet Now the last thing that we need to execute is the statefulset for our MongoDB. The mongodb-statefulset.yaml file looks like this ```yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: mongodb namespace: default spec: replicas: 2 selector: matchLabels: app: mongodb serviceName: mongodb-clusterip template: metadata: namespace: default labels: app: mongodb spec: containers: - args: - '--dbpath' - /data/db command: - mongod - '--bind_ip_all' - '--replSet' - rs0 env: - name: MONGO_INITDB_ROOT_USERNAME valueFrom: secretKeyRef: key: user name: mongodb-secret - name: MONGO_INITDB_ROOT_PASSWORD valueFrom: secretKeyRef: key: password name: mongodb-secret image: mongo:6.0.11 imagePullPolicy: IfNotPresent name: mongodb-c volumeMounts: - mountPath: /data/db name: mongodb-pvc volumeClaimTemplates: - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mongodb-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 8Gi storageClassName: longhorn ``` Then execute the kubectl apply command `kubectl apply -f mongodb-statefulset.yaml` you can change the replica number with your own. I use 2 here. After a few minutes, you should see the statefulset created. You can see it in your Rancher or using the `kubectl get` command. Let’s make sure that the Mongodb is running using kubectl exec command below. `kubectl --namespace=default exec -ti mongodb-0 -- mongosh` Then you should get the mongosh prompt opened. For the second replica, you can do the same by changing the `mongodb-0` with `mongodb-1` instead. ## Configure Replica Set To configure MongoDB replica set. First we need to open the `mongodb-0` (first pod of the MongoDb) using above command. 
Then we use the `rs.initiate()` command as below:

```
rs.initiate(
  {
    _id: "rs0",
    members: [
      { _id: 0, host: "mongodb-0.mongodb-clusterip.default.svc.cluster.local:27017" },
      { _id: 1, host: "mongodb-1.mongodb-clusterip.default.svc.cluster.local:27017" }
    ]
  }
)
```

As you may notice, the host pattern is `<pod>.<service>.<namespace>.svc.cluster.local:<port>`. Then try to add some data to the test db using the query `db.test.insertOne({ testFromMaster: "true" })` Now, exit from the primary replica and open mongosh in the secondary replica using the same command, `kubectl --namespace=default exec -ti mongodb-1 -- mongosh` You will notice that the prompt shows it is a secondary. To start reading the data replicated from the primary, we need to run the command `db.getMongo().setReadPref("primaryPreferred")` Then we can query the test data that we created before using `db.test.find({})` Now, you should see that the secondary also has the test data. That’s it. Hopefully, you found this useful. Cheers.
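If an application running in the same cluster needs to talk to this replica set, a connection string listing both members would look roughly like the sketch below. The host names follow the pod/service pattern shown above; whether you need to add credentials depends on how authentication is configured in your deployment.

```bash
# Sketch: connect to the replica set from another pod inside the cluster
mongosh "mongodb://mongodb-0.mongodb-clusterip.default.svc.cluster.local:27017,mongodb-1.mongodb-clusterip.default.svc.cluster.local:27017/?replicaSet=rs0"
```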
almatins
1,875,629
Meme Monday
*Say That Again I Dare You * Source
0
2024-06-03T15:24:05
https://dev.to/td_inc/meme-monday-1npd
ai, memes, techmemes, technology
**Say That Again I Dare You ** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/we8wh4fwgxsqf0s3462b.jpg) [Source](https://imgflip.com/i/85l0k2)
td_inc
1,875,627
Automated Acceptance Testing with Robot Framework and Testkube
Introduction Testing an application's capability usually requires setting up a...
0
2024-06-03T15:23:22
https://testkube.io/learn/automated-acceptance-testing-with-robot-framework-and-testkube
acceptancetesting, robotframework, kubernetes
## Introduction Testing an application's capability usually requires setting up a production-like environment to run tests with realistic insights into the functionality and performance of the application. Tests can be both functional and non-functional, all depending on the requirements on the application and its infrastructure. Teams usually prefer a testing solution that can be easily integrated into the existing CI/CD infrastructure without much work. But what if your testing tool can not be easily automated with your CI/CD solution or run in your existing Kubernetes infrastructure? We decided to explore Testkube, which can take the test as code for any testing tool, automate test execution and integrate with CI/CD, and generate artifacts in Kubernetes. This would allow us to perform testing seamlessly and leverage the benefits of Kubernetes. So let's learn in this blog about acceptance testing with the Robot Framework and how to automate it with Testkube. ## What is Acceptance Testing? Acceptance testing is a phase in software development where the software is tested to determine if it meets the requirements specified by the client or stakeholders. It's usually performed after system testing and before deployment. Acceptance tests are typically written from a user's perspective and aim to verify that the system behaves as expected in real-world scenarios. Acceptance testing also allows you to ensure software complies with legal and regulatory requirements. This reduces the risk of legal consequences or penalties due to non-compliance, especially in certain industries, such as healthcare or finance. Other testing practices, such as API, load, etc., contribute to different aspects of software quality, such as performance, reliability, and interoperability, and are usually done before acceptance testing. [The Robot Framework](https://robotframework.org/) is an acceptance testing tool that is easy to write and manage due to its key-driven approach. Let us learn more about the Robot Framework to enable acceptance testing. ## Acceptance Test with Robot Framework Robot Framework is an open-source, keyword-driven test automation framework that supports Python, Java, and .NET and is widely used for acceptance testing, acceptance test-driven development (ATDD), and robotic process automation (RPA). It provides a highly readable, tabular syntax for writing test cases in a plain-text format, making it accessible to both technical and non-technical users. Robot Framework stands out for the following reasons: 1. **Keyword-Driven Approach**: In Robot Framework, test cases are defined using keywords that represent the validation. These keywords are organized into test suites, which are further structured into test cases. 2. **Test Data and Variables**: Test cases in Robot Framework are written in a tabular format using plain-text files, typically in files with the `.robot` extension. 3. **Built-in and Custom Keywords**: Robot Framework offers a set of built-in keywords for performing common actions such as clicking buttons, entering text, verifying text, etc. It also allows users to define their custom keywords. 4. **Library Support**: Robot Framework supports integration with external libraries written in Python, Java, .NET, and other languages, allowing users to extend its functionality as needed. These libraries can provide additional keywords for interacting with specific technologies, such as web browsers, APIs, databases, and more. 
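To make the keyword-driven style concrete, here is a small hedged example of what a Robot Framework test case against a booking API could look like; it assumes the RequestsLibrary is installed and uses the public restful-booker demo URL, neither of which is stated at this point in the article.

```
*** Settings ***
Library    RequestsLibrary

*** Test Cases ***
Get Bookings Returns 200
    # Create a named HTTP session and verify the bookings endpoint responds
    Create Session    booker    https://restful-booker.herokuapp.com
    ${response}=    GET On Session    booker    /booking
    Status Should Be    200    ${response}
```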
Test cases written in Robot Framework can be executed using the `robot` command-line tool or through various integrated development environments (IDEs) and continuous integration (CI) systems. During execution, Robot Framework generates detailed reports and logs, highlighting the status of each test case and providing insights into test results and any encountered issues. ## Running Acceptance tests in Kubernetes While running Robot Framework tests against legacy/non-containerized applications is usually straight-forward, there are several challenges associated with running these tests against a containerized application infrastructure built on Kubernetes: - **Resource Management**: Kubernetes manages containerized applications, including their resource allocation and scaling. However, scaling requires ensuring optimal resource utilization for Robot Framework test execution. This would be challenging to deal with, especially in case of varying test workloads and resource requirements. - **Test Isolation**: Ensuring proper isolation for individual test executions can be complex while running multiple tests concurrently. It will require careful consideration to avoid resource conflict and interference between tests. - **Persistent Data Handling**: Robot Framework may require access to persistent data, such as test artifacts, test data files, or external dependencies. Managing persistent data in a Kubernetes environment, including storage provisioning, volume mounting, and data synchronization, can be challenging, especially when dealing with distributed test executions. - **Secure Access**: Access to applications or components running in Kubernetes can be severely constrained for security reasons, making it difficult to run tests against them with tools that are not running in the cluster themselves. Relaxing network policies for the sake of testing is not always an option. Thus, when using the Robot Framework for acceptance testing of your containerized applications, you will require a test-execution framework that eases the life of a developer and can orchestrate Robot Framework tests with all the above caveats in mind. Fortunately,there is such a framework specifically targeted at running any kind of testing tool, including the Robot Framework, in a containerized application infrastructure: Testkube. ## Acceptance Testing with Testkube Testkube is a test orchestration platform that supports the execution of any testing tool in a containerized application infrastructure. It leverages Kubernetes to provide a scalable and consistent execution environment for all your tests, and includes a unified dashboard for test definition, reporting, and troubleshooting. To run tests in Testkube, one defines Test Workflows for managing the execution lifecycle of individual test(s), which will then be executed in your Kubernetes environment using the corresponding testing tool image. Let us understand more about the Test Workflows feature. ### Test Workflows Test Workflows provide a declarative approach for defining test execution lifecycles. It leverages Kubernetes-native capabilities to orchestrate and execute tests, collect real-time artifacts, and display them on the Testkube dashboard. 
A Test Workflow is defined as a single YAML configuration that is stored as a custom resource in your Kubernetes cluster(s), [Test Workflows](https://testkube.io/learn/getting-started-with-test-workflows-for-kubernetes-testing) allow integration with existing testing tools and GitOps pipelines, and also provide [templates](https://docs.testkube.io/articles/test-workflow-templates/) for standardized testing configurations adaptable to various needs. ### Execute Acceptance Testing in Testkube To run a Robot Framework acceptance test with Testkube, you will need to define a corresponding TestWorkflow that uses a prepackaged Robot container image to run the test cases that you have defined - let's have a look at how that works. We will see two demos here using the Robot Framework in Testkube using the sample example, [Restful Booker](https://docs.robotframework.org/docs/examples/restfulbooker), provided by Robot Framework. 1. **Basic**: In the basic execution we will run the sample test case by Restful Booker. In this test case, we will get bookings for a user, create a booking for a user, and verify that the application performs the required action. All these user details are hardcoded in the test case. We will execute the test in the Testkube dashboard and collect artifacts. 2. **Advanced**: We will update the above test case to use variables instead of hard coding user details and make use of parameters to provide values. This will give us the flexibility to test different scenarios. In Testkube we will make use of parameters to process these parameterized inputs. So let us see Testkube in action performing an acceptance test using Robot Framework. Before we get started, let us verify the following prerequisites. ### Pre-requisites To follow along, you will need to set up the following: - A Testkube Pro account (free plan is fine) - A Kubernetes cluster - we're using a local Minikube cluster (https://minikube.sigs.k8s.io) - The [Testkube Agent](https://docs.testkube.io/testkube-cloud/articles/installing-agent) is configured on the cluster. Once the prerequisites are in place, you should have a target Kubernetes cluster ready with a Testkube agent configured. ### Basic Execution In the basic execution, we will run the [example](https://docs.robotframework.org/docs/examples/restfulbooker) Restful Booker test code in Testkube using Test Workflows. We have pushed the test case to a [GitHub repository](https://github.com/cerebro1/robot-framework-test/tree/main). These files will by default be mounted by Testkube to the '/data/repo' directory when the test executes. Let us get started. 1. Select Test Workflows from the menu and click on the 'Add a new test workflow' button to start setting up the workflow. 2. From the options to 'Create a Test Workflow', select 'Create from scratch'. ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e3ofu4uex7qh5ehq9ltj.png) 3. Enter the following details to create a new workflow. a. **Name**: Set the name of the workflow as 'basic-acceptance-test' b. **Type**: Select the type of test as 'Other' c. **Image**: Enter the name of the [Docker Container Image for Robot Framework](https://docs.robotframework.org/docs/using_rf_in_ci_systems/docker#popular-docker-images-for-robot-framework). We are using 'ppodgorsek/robot-framework' d. **Tag**: Search for the stable tag on [Docker Hub](https://hub.docker.com/r/ppodgorsek/robot-framework/tags) for the image and enter it here. 
We have been using [7.1.0](https://hub.docker.com/layers/ppodgorsek/robot-framework/7.1.0/images/sha256-064a9ac5e4223456e1e18133cca593359677db6afa2fd6d846a653b4733b31d7?context=explore) since it was recently released. e. **Shell command**: By default, the repository that has the [test case](https://github.com/cerebro1/robot-framework-test/blob/main/test-restful.robot) is mounted to the `/data/repo` directory. We will use the `robot` command line tool to execute the test, and using the –outputdir option, we will save the output to a specific location. Here is the shell command: 'robot --outputdir /data/repo/output /data/repo/test-restful.robot'. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mlbhzfttilnafh4um4z3.png)Create a test workflow with your custom details. 4. Select the Source as Git from the drop-down menu and enter the repository details. 5. In the final step, the UI will create the YAML for our Test Workflow. Let us go ahead and add the path where the artifacts will be stored in `spec.steps`. Here is what the final configuration looks like. ```yaml kind: TestWorkflow apiVersion: testworkflows.testkube.io/v1 metadata: name: basic-acceptance-test spec: content: git: uri: https://github.com/cerebro1/robot-framework-test.git container: image: ppodgorsek/robot-framework:7.1.0 steps: - name: Run test shell: robot --outputdir /data/repo/output /data/repo/test-restful.robot artifacts: paths: - /data/repo/output/* ``` 6. Click on Create and your 'basic-acceptance-test' workflow is ready: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kmyuvgd6zwnwnncv4c0a.png)Test Workflow created in Testkube 7. Click on 'Run this Test Workflow now' to execute the workflow. Once the Test Workflow executes successfully, you will see the Test Workflow in the 'Recent executions' list. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ntae538aqlvtksk3alp.png)Test Workflow execution is complete and successful. 8. Select the Test Workflow to view the details. You can see each step has been executed successfully in the following image. This is helpful for debugging in case something goes wrong or you need more insights. Test Workflow has a separate section for Log Output, Artifacts, and Workflow. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n9ebf03znu7wtsh1anr1.png)Test Workflow Log Output shows each step execution. 9. Click on 'Run shell command' to view the details of the test cases executed. The execution shows that all the tests have passed successfully. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qxlj77ec4nn7yiuzyrif.png)Robot Framework test cases executed in Test Workflow 10. Click on the Artifacts tab to view the artifacts processed. You can download the artifacts generated by the Robot Framework here. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/spx3u0hv31yms18gf63g.png)Test Workflow successfully processes the artifacts. Thus, with the help of Test Workflow, we were able to run a Robot Framework test seamlessly without any setup required on the local machine. The output shows all the tests have passed. Testkube has processed the artifacts and provided them for download. Now, let us suppose, this Robot Framework test has some parameterized values to be passed with the test. Don't worry! Testkube has got you covered. In the Test Workflows, you can pass the parameters and provide custom values. 
Let us understand this in detail in the coming section. ### Advanced Execution with Parameters #### Variables in Robot Framework In Robot Framework, you can define the parameters as variables, and the values can be passed at the time of execution. The default values to these parameters can be set in the Robot Framework test configuration as shown below: ``` *** Variables *** ${my_var} my_default_value *** Test Cases *** Test Case 1 Log ${my_var} ``` While using the `robot` command line utility, these variables can be passed as parameters as shown below: ``` $ robot --variable my_var:my_value test.robot ``` #### Robot Framework Variables in Test Workflows Test Workflows allows you to define configuration parameters for your workflows, which can be passed to the Robot Framework as input. Let's add parameters for the `firstname`, `lastname` and `totalprice` parameters to our workflow: ```yaml kind: TestWorkflow apiVersion: testworkflows.testkube.io/v1 metadata: name: basic-acceptance-test namespace: testkube spec: config: firstname: type: string default: Jim lastname: type: string default: Henderson totalprice: type: string default: "100" content: git: uri: https://github.com/cerebro1/robot-framework-test.git container: image: ppodgorsek/robot-framework:7.1.0 steps: - name: Run test shell: robot --variable firstname:{{ config.firstname }} --variable lastname:{{config.lastname }} --variable totalprice:{{ config.totalprice }} --outputdir /data/repo/output-{{config.firstname}}-{{config.lastname}} /data/repo/advanced-test-restful.robot artifacts: paths: - /data/repo/output-{{config.firstname}}-{{config.lastname}}/* ``` As you can see we have defined three config variables which we then pass to the robot command, and use in the naming of the output folder. When we new try to run this test we are first prompted to provide values for our variables: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k03kl7devgf59smaj1g7.png)Just pressing the run button uses the default values provided as you can see in the log output: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h7j2rnzrk0ah1awbg86m.png) ## Integrating Testkube with CI/CD Although it is useful to run tests directly from the Testkube Dashboard as shown above, you will most likely want to start these tests from your existing CI/CD pipelines. Fortunately, Testkube makes this really easy: - Navigate to the CI/CD Integration tab for your Test Workflow (see below) - Select the CI/CD tool you are using from the menu to the left - a corresponding example will be shown to the right that you can copy/paste directly into your tool ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pyjl03egwievl81wzx8k.png) ## Conclusion Using Test Workflows, we were able to easily configure the execution of our Robot Framework test, and also pass custom variables to Robot Framework in a seamless way, and automate the execution of our tests from our CI/CD system. The configuration is easy to understand and manage, which not only reduces the efforts of developers but also increases the maintainability of the configuration. For a scalable approach to test execution in a Kubernetes environment, Testkube is the solution. It takes any testing tool, provides a template, or allows you to start from scratch and automate your tests. Teams can manage the entire test life cycle in an optimized way from a single dashboard. 
This not only empowers them with the feasibility of scaling but also standardizes the testing process. In this tutorial, we have seen how acceptance testing with Robot Framework could be done using Test Workflows in Testkube. Similarly, teams can create multiple test workflows to manage unit, load, API, acceptance, etc., tests from the same platform. We invite you to try Test Workflows. Visit [Testkube website](https://testkube.io/) to get started. If you find yourself struggling with anything, feel free to drop a note in our active [Slack community](https://join.slack.com/t/testkubeworkspace/shared_invite/zt-2arhz5vmu-U2r3WZ69iPya5Fw0hMhRDg), and someone will help you out.
michael20003
1,875,533
Get Random Documents from Firestore
When working with Firestore, retrieving random elements poses a challenge because Firestore does not...
0
2024-06-03T15:22:05
https://dev.to/mtomto_tech/get-random-documents-from-firestore-4c65
firebase, nosql, algorithms, beginners
When working with Firestore, retrieving random elements poses a challenge because Firestore does not natively support queries for fetching documents randomly. In addition, Firestore does not support offset queries, so it is not possible to generate a random number on the client side and then fetch documents randomly by specifying an index. **Basic Solution** - Fetch all documents in the collection, shuffle the array, and then select elements either from the beginning or at random indexes. This method is quite simple. However, if there are a large number of documents, it can be quite costly and the processing speed may become slow. **Another Approach: Changing the Way to Manage Data** This is a solution that only loads the necessary records. 1. Configure a property within each document that has a number assigned randomly. 2. When fetching documents, use an `orderBy` query targeting this random number property. By combining this with a `limit` query, it is possible to fetch a specified number of documents or even the entire collection. 3. After fetching the documents, assign a new random number to the random number property of each document. This step ensures that the collection remains shuffled reproducibly. ![firestore console](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2uvd4gp4tdbflvg7vh8z.png) **Bottleneck** This method requires a significant number of write operations. Since Firestore incurs higher costs for write operations compared to read or delete operations, frequent fetching can become costly. See also: https://firebase.google.com/docs/firestore/pricing **Finally** I was pleasantly surprised when I first discovered this method. Despite Firestore not supplying direct queries for fetching random documents, this creative solution offers an excellent workaround.
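To make the flow above concrete, here is a minimal sketch assuming the Firebase v9 modular JavaScript SDK and a collection named `items` whose documents already carry a numeric `random` property (the collection and field names are illustrative):

```js
import {
  getFirestore,
  collection,
  query,
  orderBy,
  limit,
  getDocs,
  updateDoc,
} from "firebase/firestore";

const db = getFirestore();

// Fetch `count` pseudo-random documents by ordering on the pre-assigned `random` field.
async function getRandomDocs(count) {
  const q = query(collection(db, "items"), orderBy("random"), limit(count));
  const snapshot = await getDocs(q);

  // Re-shuffle: give every fetched document a fresh random number so the
  // next query returns a different slice of the collection.
  await Promise.all(
    snapshot.docs.map((d) => updateDoc(d.ref, { random: Math.random() }))
  );

  return snapshot.docs.map((d) => ({ id: d.id, ...d.data() }));
}
```

Note that the re-shuffling step is exactly where the extra write costs mentioned above come from: one write per fetched document.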
mtomto_tech
1,875,624
Install NerdFont (or any fonts) using the command line in Debian (or other Linux)
I need to install the NerdFonts since I have been installing LazyVim recently. After doing some...
0
2024-06-03T15:22:04
https://dev.to/almatins/install-nerdfont-or-any-fonts-using-the-command-line-in-debian-or-other-linux-467o
nerdfont, linux, cli, lazyvim
I need to install the [NerdFonts](https://www.nerdfonts.com/) since I have been installing [LazyVim](https://www.lazyvim.org/) recently. After doing some research, I found that the convenient way to install it was using the command line. Here are the steps. 1. Go to the NerdFonts website in the [download](https://www.nerdfonts.com/font-downloads) section, then choose the font that you want to install. 2. Right-click on the download button and Copy the link address menu (depends on what browser you are using, the point is to get the font link) 3. Open your terminal 4. Copy this command then update the font link with the link that you just copied. ``` wget -P ~/.local/share/fonts https://github.com/ryanoasis/nerd-fonts/releases/download/v3.0.2/JetBrainsMono.zip \ && cd ~/.local/share/fonts \ && unzip JetBrainsMono.zip \ && rm JetBrainsMono.zip \ && fc-cache -fv ``` And that’s it. It will download and install the fonts for you. Then, you can reload your LazyVim to see the effect. I hope you found this post useful. Cheers!
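If you want to double-check that the fonts were registered, a quick fontconfig query helps (the family name below assumes the JetBrainsMono example used above):

```
fc-list | grep -i "JetBrainsMono"
```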
almatins
1,875,568
How to use align-items/self?
A post by Nabil Senhadji (Student)
0
2024-06-03T15:19:24
https://dev.to/creativename/how-to-use-align-itemsself-p4e
beginners, css, learning, html
creativename
1,875,566
HOW TO HOST A STATIC WEBSITE IN AZURE BLOB STORAGE
Introduction A static website is a type of website that is composed of HTML, CSS, JavaScript,...
0
2024-06-03T15:16:30
https://dev.to/droz79/how-to-host-a-static-website-in-azure-blob-storage-4jc8
azure, blob, webdev, website
**Introduction** A static website is a type of website that is composed of HTML, CSS, JavaScript, images, and other static files. Unlike dynamic websites, which generate content on the server side in response to user requests, static websites deliver pre-built content to users exactly as it's stored. This means that every visitor to a static website receives the same content, regardless of their interactions with the site. Static websites are typically used for simple websites with fixed content, such as personal blogs, company brochures, portfolios, or landing pages. They are easy to set up, fast to load, and require minimal maintenance compared to dynamic websites. Hosting a static website is often more cost-effective and straightforward, making it an attractive option for many individuals and businesses looking to establish an online presence. **Prerequisites** Azure Subscription: Ensure you have an active Azure subscription. Visual Studio: Make sure you have Visual Studio Code installed on your machine. Azure Storage Account: You need to have an Azure Storage account set up. Here's a step-by-step guide: **Create an Azure Storage Account.** - Log in to the Azure portal (https://portal.azure.com). - Click on "Create a resource" in the top-left corner. - In the Azure Marketplace, search for "Storage Account" and select it. - Click the "Create" button. - Fill out the required details, such as subscription, resource group, storage account name, location, and performance. - Leave other settings at their defaults for simplicity, or adjust them according to your needs. - Click "Review + Create" and then "Create" to create the storage account. In the example below, we are creating a storage account labelled as "androsstorage" ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a4f0xqztwnmao2nopb1m.png) - Once the deployment is complete, navigate to your newly created storage account. **Enable Static Website Hosting** - In the left-hand menu of your storage account, under "Settings", select "Static website". - Click on "Enabled" to enable static website hosting. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5n18few16s1wkstw9jlt.png) - Set the index document name (usually "index.html") and optionally set the error document path (usually "error.html") if you have one. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t5sq5d0wuuic5rf3v3fb.png) - Click "Save" to save the configuration. - Once it's saved, primary and secondary endpoints are generated. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/owv1ipunw4cyfx28ilsn.png) - Go back to your storage account, click on containers, you can see th at a web container has been created to host your static website data. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/drb05ciu0ddebrsp0opq.png) **Create or Open a Static Website Project in Visual Studio** - Open Visual Studio. - Create a new project by selecting "File" > "New" > "Project" and choose the appropriate template for your static website (e.g., HTML/JavaScript project). - If you have an existing static website project, open it by selecting "File" > "Open" > "Project/Solution" and navigate to your project folder. In the example below, a file labelled "ceevee" has been opened. - Ensure your project contains all the necessary files for your static website (HTML, CSS, JavaScript, images, etc.). 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f6nzrwpbt45g49io8tc0.png) **Upload Your Website Files to Azure Blob Storage** - Open Azure Storage Explorer and sign in with your Azure account. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2217lk0yw0il2j2i2fwp.png) - Navigate to your storage account and find the Blob Container created for your static website (usually named $web). - Drag and drop your website files from your local machine to the Blob Container in Azure Storage Explorer. - Alternatively, you can right-click on the container and select "Upload" to upload files from your local machine. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/14o97obzxfmszowmss79.png) **Verify the Website URL** - Once the files are uploaded, go back to the Azure portal and navigate to your Storage Account. - In the Static website settings, you'll find the primary endpoint URL of your website. It should be something like https://<storage-account-name>.z6.web.core.windows.net. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/66rkunlkrasyrgmc9ixj.png) - Copy this URL and paste it into a web browser to verify that your website is accessible. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r4nygbl4wl8id9lk2dns.png) By following these detailed steps, you should be able to host your static website in Azure Blob Storage using Visual Studio. Good Luck!
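If you prefer the command line over the portal, the same setup can be scripted with the Azure CLI. This is only a rough sketch; the storage account name and the local folder are placeholders to replace with your own values:

```bash
# Enable static website hosting on the storage account
az storage blob service-properties update \
    --account-name <storage-account-name> \
    --static-website \
    --index-document index.html \
    --404-document error.html

# Upload the site files to the generated $web container
az storage blob upload-batch \
    --account-name <storage-account-name> \
    --source ./my-site \
    --destination '$web'
```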
droz79
1,875,564
How to Play a Game and Win Real Money
In this comprehensive guide, we'll explore everything you need to know about online games for earning...
0
2024-06-03T15:16:04
https://dev.to/world_explorer_dd0d0d4cc6/how-to-play-a-game-and-win-real-money-3mg5
holi
In this comprehensive guide, we'll explore everything you need to know about online games for earning money, from the different types of games available to tips on how to maximize your earnings. Online Games: In recent years, the concept of earning money through online gaming has gained significant traction. With the rise of technology and the internet, more and more people are turning to gaming as not just a hobby, but also a source of income. Whether you're a seasoned gamer or just starting, there are plenty of opportunities available to make some extra cash. In this comprehensive guide, we'll explore everything you need to know about online games for earning money, from the different types of games available to tips on how to maximize your earnings. What are Online Games for Earning Money? Online games (**[Satta King](https://satta-king-online.in/)**) for earning money, also known as play-to-earn games, allow players to earn real-world rewards or money by playing them. These games often incorporate elements of skill, strategy, and competition, allowing players to win prizes or cash based on their performance. Types of Online Games for Earning Money There is a wide variety of online games available for earning money, catering to different interests and skill levels. Some popular types include: Skill-based Games: These games require players to demonstrate skill and proficiency to win rewards. Examples include esports titles like Dota 2 and League of Legends, as well as skill-based casino games like poker and blackjack. Blockchain Games: These games utilize blockchain technology to enable players to own and trade in-game assets, such as digital currencies, items, or characters. Games like Axie Infinity and Decentraland have gained popularity in this space. Mobile Apps: With the proliferation of smartphones, many apps offer opportunities to earn money through gaming. Whether it's completing tasks, participating in surveys, or playing mini-games, mobile apps provide a convenient way to earn rewards on the go. Virtual Casinos: Online casinos offer a range of games, including slots, roulette, and blackjack, where players can wager real money to win big. Tips for Maximizing Earnings When it comes to earning money through online gaming, it's essential to choose the right game that aligns with your interests and skills. Consider factors such as the game's popularity, pay-out structure, and your level of expertise before diving in. Practice and Improve Skills In skill-based games, practice makes perfect. Take the time to hone your skills and learn advanced strategies to improve your chances of winning. Participating in online communities, watching tutorials, and studying gameplay footage can all help sharpen your abilities. Stay Informed About Trends The online gaming landscape constantly evolves, with new games and trends emerging regularly. Stay informed about the latest developments in the industry, including new releases, updates, and tournaments, to capitalize on opportunities as they arise. Manage Your Time and Finances Wisely While online gaming can be lucrative, it's essential to approach it with caution and discipline. Set aside dedicated time for gaming and establish a budget for your activities to avoid overspending or becoming too immersed in gameplay. The Evolution of Online Casino Gaming Online Satta King gaming has come a long way since its inception. Gone are the days of basic graphics and limited game selection. 
In 2024, players can enjoy a wide array of high-quality games featuring stunning graphics, realistic sound effects, and innovative gameplay mechanics. Whether you're a seasoned gambler or a casual player, you'll find plenty of options to suit your preferences. Diverse Selection of Games One of the biggest advantages of online casinos is the vast selection of games available at your fingertips. From traditional favorites like blackjack, roulette, and poker to modern creations like video slots and virtual reality experiences, the options are virtually endless. Players can explore different themes, gameplay mechanics, and betting options, ensuring that every gaming session is unique and exciting. Immersive Live Dealer Experiences For those craving the thrill of a real-life casino experience, many online casinos now offer live dealer games. These immersive experiences allow players to interact with professional dealers in real-time, creating an authentic atmosphere reminiscent of a land-based casino. Whether you're playing blackjack, roulette, or baccarat, you'll enjoy the excitement of watching the action unfold before your eyes. Cutting-Edge Technology Advancements in technology have played a significant role in shaping the online casino industry. From mobile gaming to virtual reality, casinos are constantly pushing the boundaries to provide players with the ultimate gaming experience. Mobile apps allow players to enjoy their favorite games on the go, while VR technology transports them to virtual worlds where anything is possible. Safety and Security Safety and security are top priorities for online casinos in 2024. With the rise of cyber threats and data breaches, casinos are investing heavily in state-of-the-art security measures to protect their players' information. Encryption technology, secure payment methods, and strict verification processes ensure that players can enjoy their favorite games with peace of mind. Responsible Gaming Practices In addition to prioritizing safety and security, online casinos are also committed to promoting responsible gaming practices. Measures such as age verification, self-exclusion options, and responsible gaming tools help ensure that players gamble responsibly and within their means. Casinos also provide resources and support for those who may be struggling with gambling addiction. Online Trends Pro Conclusion In conclusion online games for earning money offer a unique and exciting opportunity for players to turn their passion for gaming into a profitable venture. By understanding the different types of games available, implementing strategies to maximize earnings, and maintaining a balanced approach, players can unlock the full potential of this burgeoning industry. So why wait? Start exploring the world of online gaming today and discover the endless possibilities awaiting you!
world_explorer_dd0d0d4cc6
1,871,642
Data API for Amazon Aurora Serverless v2 with AWS SDK for Java - Part 7 Data API meets SnapStart
Introduction In the part 5 we measured cold and warm starts of our sample application...
26,067
2024-06-03T15:14:18
https://dev.to/aws-builders/data-api-for-amazon-aurora-serverless-v2-with-aws-sdk-for-java-part-7-data-api-meets-snapstart-22eb
aws, serverless, java, database
## Introduction In the [part 5](https://dev.to/aws-builders/data-api-for-amazon-aurora-serverless-v2-with-aws-sdk-for-java-part-5-basic-cold-and-warm-starts-measurements-4gi9) we measured cold and warm starts of our sample application which uses Data API for Amazon Aurora Serverless v2 with AWS SDK for Java and in the [part 6](https://dev.to/aws-builders/data-api-for-amazon-aurora-serverless-v2-with-aws-sdk-for-java-part-6-comparing-cold-and-warm-starts-between-data-api-and-jdbc-56hj) we did the same measurements using the standard connection management solutions like JDBC including the usage of the Amazon RDS Proxy service and compared the result. In this part of the series we'll enable AWS SnapStart to the Lambda function communicating with Aurora Serverless v2 PostgreSQL via Data API and also apply optimization technique called priming. ## Measuring cold and warm starts with SnapStart enabled and priming I released a separate series about [AWS Lambda SnapStart](https://dev.to/vkazulkin/measuring-java-11-lambda-cold-starts-with-snapstart-part-1-first-impressions-30a4) where I talked about its benefits and also measured warm and cold starts for the similar [application ](https://dev.to/aws-builders/measuring-lambda-cold-starts-with-aws-snapstart-part-9-measuring-with-java-21-5e52) which also used Java 21 managed Lambda runtime but DynamoDB database instead of Aurora Serverless v2 with Data API. In our experiment we'll re-use the application introduced in the [part 1](https://dev.to/aws-builders/data-api-for-amazon-aurora-serverless-v2-with-aws-sdk-for-java-part-1-introduction-and-set-up-of-the-sample-application-3g71) for this which you can find [here](https://github.com/Vadym79/AWSLambdaJavaAuroraServerlessV2DataApi). We will measure cold and warm start for 2 approaches: - Enable SnapStart for Lambda function [GetProductByIdViaAuroraServerlessV2DataApi ](https://github.com/Vadym79/AWSLambdaJavaAuroraServerlessV2DataApi/blob/master/template.yaml) by adding ``` SnapStart: ApplyOn: PublishedVersions ``` to the Properties: section of the Lambda function - Additionally apply priming technique to the Lambda function with SnapStart enabled by priming the database request. I explained priming in my article [Measuring priming, end to end latency and deployment time with Java ](https://dev.to/aws-builders/measuring-java-11-lambda-cold-starts-with-snapstart-part-5-priming-end-to-end-latency-and-deployment-time-jem). In our case I implemented additional Lambda function [GetProductByIdViaAuroraServerlessV2DataApiWithPriming](https://github.com/Vadym79/AWSLambdaJavaAuroraServerlessV2DataApi/blob/master/template.yaml). In its implementation [GetProductByIdViaAuroraServerlessV2DataApiWithPrimingHandler](https://github.com/Vadym79/AWSLambdaJavaAuroraServerlessV2DataApi/blob/master/src/main/java/software/amazonaws/example/product/handler/GetProductByIdViaAuroraServerlessV2DataApiWithPrimingHandler.java) you can see how priming works in action : ``` public void beforeCheckpoint(org.crac.Context<? extends Resource> context) throws Exception {             auroraServerlessV2DataApiDao.getProductById("0");       } ``` In the **beforeCheckpoint** Lambda runtime hook method (which uses [CRaC API](https://openjdk.org/projects/crac/)) we prime database invocation by retrieving the product with id equals to 0 from the database using Data API for Aurora Serverless v2. 
With that we pre-initialize all the classes involved in the invocation chain and also pre-initialize the synchronous HTTP client (we use the default one, which is Apache), so that everything is directly available after the Firecracker microVM is restored. For that our Lambda handler needs to implement the **org.crac.Resource** interface, and register this class to be CRaC-aware with:

```
public GetProductByIdViaAuroraServerlessV2DataApiWithPrimingHandler() {
    Core.getGlobalContext().register(this);
}
```

Additionally we need to include the following dependency

```
<dependency>
  <groupId>io.github.crac</groupId>
  <artifactId>org-crac</artifactId>
  <version>0.1.3</version>
</dependency>
```

in the [pom.xml](https://github.com/Vadym79/AWSLambdaJavaAuroraServerlessV2DataApi/blob/master/pom.xml).

The results of the experiment, which retrieves an existing product from the database by its id, are shown below. They are based on reproducing more than 100 cold and at least 30,000 warm starts during a run of approximately 1 hour. For it (and the experiments from my previous article) I used the load test tool [hey](https://github.com/rakyll/hey) (a sample invocation is shown at the end of this article), but you can use whatever tool you want, like [Serverless-artillery](https://www.npmjs.com/package/serverless-artillery) or [Postman](https://www.postman.com/).

**Cold (c) and warm (w) start time in ms:**

|Approach|c p50|c p75|c p90|c p99|c p99.9|c max|w p50|w p75|w p90|w p99|w p99.9|w max|
|---------------|----------|-----------|----------|----------|----------|----------|-----------|----------|----------|----------|----------|----------|
|No SnapStart enabled|3154.35|3237|3284.91|3581.49|3702.12|3764.92|104.68|173.96|271.32|572.11|1482.89|2179.7|
|SnapStart enabled without priming|1856.11|1994.61|2467.83|3229.11|3238.80|3241.75|61.02|113.32|185.37|639.35|1973.30|2878.5|
|SnapStart enabled with priming of database invocation via Data API|990.84|1069.04|1634.84|2120.00|2285.03|2286.9|60.06|106.35|185.37|581.27|1605.37|2658.24|

## Conclusion

In this part of the series, we applied SnapStart to the Lambda function and confirmed that it significantly reduces the cold and warm starts of our Lambda function. This was especially true when we also applied priming of the Data API invocation. In the next part of the series we'll introduce various optimization strategies for the cold and warm starts.
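As referenced above, a representative `hey` invocation for this kind of measurement could look like the following. The endpoint URL, duration, and concurrency are illustrative assumptions, not the exact values used in the experiment:

```
# roughly one-hour run with 10 concurrent workers against the API Gateway endpoint
hey -z 60m -c 10 https://<api-id>.execute-api.<region>.amazonaws.com/prod/products/1
```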
vkazulkin
1,875,563
Essential Insights from Rapyd’s Latest Report Every Payment Industry Developer Must Know
Key Points for Developers: Payment Delays: 73% of businesses in high-opportunity...
0
2024-06-03T15:13:31
https://community.rapyd.net/t/73-of-businesses-struggle-with-payment-delays-according-to-rapyd-s-2024-state-of-payments-for-high-opportunity-industries/59349
ai, payments, fintech, rapydreports
![Key takeaways from Rapyd develoeprs need to know](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xyynd26oepwad53gilbv.jpg) ## Key Points for Developers: - **Payment Delays:** 73% of businesses in high-opportunity industries face delays of 2-15 days, which impacts cash flow. - **Priority on Speed & Convenience:** 38% of businesses prioritize fast and easy transfers. This is crucial for maintaining smooth operations. - **Real-Time Payments (RTP):** There’s a growing shift towards RTP for faster settlements and improved cash flow. Consider integrating RTP solutions to enhance payment processes. - **AI in Fraud Prevention:** AI is increasingly used to combat sophisticated fraud tactics. Implementing AI can help ensure secure and efficient transactions.
uxdrew
1,873,289
Using Supabase to Store Images in a .NET Application
Table of Contents Introduction Set Up Supabase Storage Create a .NET Project Using Visual...
0
2024-06-03T15:12:27
https://dev.to/reliable25/using-supabase-to-store-images-in-a-net-application-2o8g
webdev, api, dotnet, backend
## Table of Contents - [Introduction] (#intro) - [Set Up Supabase Storage] (#set-up) - [Create a .NET Project Using Visual Studio] (#create) - [Configure Supabase Client] (#configure) - [Conclusion] (#conclusion) <a id= "intro"></a> ## Introduction Supabase storage offers a powerful solution for efficiently managing file storage in web applications. In this guide, we'll explore how to leverage Supabase storage within a .NET application to handle image uploads seamlessly. By using Supabase storage, we can avoid the pitfalls of storing large files directly within the database, which can adversely affect database performance and scalability. Instead, Supabase converts uploaded files into accessible URLs, which are then stored in the database. This approach not only helps maintain a lean and efficient database but also simplifies the process of accessing and integrating files into your application. In this tutorial, we'll walk through the steps to set up a .NET API that allows users to upload images. These images will be stored securely in Supabase storage, and their URLs will be gotten which can later be stored in the database for easy retrieval and integration into your application. Let's dive in and discover how Supabase storage can streamline image management in your .NET projects. <a id= "set-up"></a> ## Set Up Supabase Storage **Step 1:** Sign up at [supabase.com](https://supabase.com/) and create a new project. **Project Details** - **Name:** Enter a name for your project. - **Database Password:** Create a strong password for your database. This password will be used to connect to your database, so ensure it is secure. - **Region:** Choose a region for your database server. If you are in Nigeria, the closest region is typically Europe (London). Selecting a closer region can help minimize latency for users in Nigeria. Click on the `Create new project` button to finalize the creation of your new Supabase project. ![create supabase project](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mxz5vzf33vxp5e1t3tio.PNG) **Step 2:** Create a Bucket in the new project - Click on the `Storage` tab on the left-hand menu. - Click the `New Bucket` button. - Enter a name for your `new bucket`, e.g., photos. - Toggle the switch to make the bucket public. This setting allows anyone to access the files in this bucket without needing authentication. - Click the Save button to save your new bucket. ![Add New bucket](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/88kjhte7ir3nzexvws3t.PNG) <a id= "create"></a> ## Create a .NET Project Using Visual Studio - Launch Visual Studio. - Click on `Create a new project` . - Search for `ASP.NET Core Web API` and select it. - Enter a name for your project (e.g., SupabaseImageUpload). - Choose a location to save your project. - select the target framework (e.g., .NET 6.0 or .NET 7.0 or .NET 8.0). - Click `Create` . <a id= "configure"></a> ## Configure Supabase Client **Step 1:** Install the necessary NuGet packages: - Open your project in Visual Studio. - Right-click on your project in the Solution Explorer. - Select `Manage NuGet Packages...` - Search for `Supabase` and select the `Supabase` package from the list. - Click the `Install` button and ensure that the `Supabase` package is listed among the installed packages. 
![Install supabase](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o6gvnlwj85npteedpjwt.PNG) **Step 2:** Configure Supabase in appsettings.json ```json "Supabase": { "Url": "https://your-supabase-url.supabase.co", "ApiKey": "your-supabase-api-key" } ``` - Click on the `Settings` tab on the left-hand menu and then click on `API` - Copy the URL provided under the API settings and paste in the `Url` in the `appsettings.json` - Click on the `service_role` secret to reveal it, Copy the API key (secret) provided and paste in the `Apikey` in the `appsettings.json` ![appsettings](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j0rfykd9t54itsxbva49.PNG) **Step 3:** Program.cs Configuration ```csharp builder.Services.AddScoped<Supabase.Client>(_ => new Supabase.Client( builder.Configuration["Supabase:Url"], builder.Configuration["Supabase:ApiKey"], new SupabaseOptions { AutoRefreshToken = true, AutoConnectRealtime = true, })); ``` **Step 4:** Create a model for the request. Add `CreateImageRequest.cs` ```csharp public class CreateImageRequest { public string Name { get; set; } public IFormFile Image { get; set; } } ``` **Step 5:** Create a controller ImageController.cs: ```csharp [Route("api/[controller]")] [ApiController] public class ImageController : Controller { [HttpPost] public async Task<IActionResult> UploadImage([FromForm] CreateImageRequest request, [FromServices] Supabase.Client client) { using var memoryStream = new MemoryStream(); await request.Image.CopyToAsync(memoryStream); var imageBytes = memoryStream.ToArray(); var bucket = client.Storage.From("photos"); var fileName = $"{Guid.NewGuid()}_{request.Image.FileName}"; await bucket.Upload(imageBytes, fileName); var publicUrl = bucket.GetPublicUrl(fileName); return Ok(new { Url = publicUrl }); } } ``` **Step 6:** Test and Run your .NET application - Use a tool like Postman or Swagger to test the image upload endpoint: - URL: `https://your-localhost/api/Image` - Method: `POST` - Form Data: Name: Any string value. Image: Choose an image file to upload. ![End result](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cv0mwq4txbvgnq4il676.PNG) ![final result](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pkygnr1gy4a6xn2aheo2.PNG) <a id= "conclusion"></a> ## Conclusion In this article, you've successfully set up a .NET API that can upload images to Supabase storage and retrieve their public URLs. This can be extended and integrated into various applications where you need to store and manage images or other files. Supabase provides a robust backend solution with easy integration for your .NET applications. Here is the [GitHub repository](https://github.com/Reliable25/SupabaseImageUpload) for this article in case you need to check it out. Lastly, if you have found value in this article, please consider sharing it with your peers who may also benefit from it. What are your thoughts on the topic "Using Supabase to Store Images in a .NET Application"? Feel free to share your thoughts in the comments section below. Happy coding!
reliable25
1,875,562
Deploy Go Application using Docker Compose Replicas and Nginx
Deploying the Go application using docker and docker-compose with Nginx load balancer can be achieved...
0
2024-06-03T15:12:19
https://dev.to/almatins/deploy-go-application-using-docker-compose-with-nginx-load-balancer-f1h
docker, go, nginx, deploy
Deploying the Go application using docker and docker-compose with Nginx load balancer can be achieved using the below strategy. This is not the only one on how to do the task, but hopefully, you can find this useful. ## Project Structure My Go application project structure looks like this: ``` /go-app |-- cmd |-- internal <-- the app source code |-- nginx |-- nginx.conf |-- config.json <-- my config file for Viper |-- docker-compose.yaml |-- Dockerfile |-- go.mod |-- go.sum |-- main.go ``` ## Nginx Configuration ``` user nginx; # can handle 1000 concurrent connections events { worker_connections 1000; } # forwards http requests http { # http server server { # listens the requests coming on port 8080 listen 80; access_log off; proxy_request_buffering off; proxy_buffering off; # / means all the requests have to be forwarded to api service location / { # resolves the IP of api using Docker internal DNS proxy_pass http://rest-api:3000; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } } } ``` ## Dockerfile ```yaml FROM golang:1.22-alpine AS builder # working directory (/build). WORKDIR /build # dependency using go mod. COPY go.mod go.sum ./ RUN go mod download # Copy the code COPY . . # environment variables for docker image # and build the server. ENV CGO_ENABLED=0 GOOS=linux GOARCH=amd64 RUN apk add --no-cache dumb-init RUN go build -ldflags="-s -w" -o apiserver ./main.go FROM alpine:latest # working directory (/build). WORKDIR / # Copy the Pre-built binary file from the previous stage. COPY --from=builder ["/usr/bin/dumb-init", "/usr/bin/dumb-init"] COPY --from=builder ["/build/apiserver", "/"] COPY --from=builder ["/build/config.json", "/config.json"] # Export necessary port. EXPOSE 3000 ENTRYPOINT ["/usr/bin/dumb-init", "--"] CMD ["/apiserver"] ``` ## Docker Compose ```yaml services: # service name rest-api: # Dockerfile location build: "." # Exposes the port 3000 for internal ports: - "3000" # always restart when the service went down restart: always # number of replicas deploy: replicas: 2 # nginx load balancer nginx: # latest stable alpine nginx image image: nginx:stable-alpine # nginx configuration volumes: - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro # start nginx after the service up successfully depends_on: - rest-api # map the nginx port 80 to docker port 3000 ports: - "3000:80" ``` That’s it, now we can test the docker using `docker compose build` then continue with `docker compose up -d` . Hope this can help someone out there. happy coding!
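The compose file above assumes a Go application listening on port 3000. The project's real source lives in `internal/`, so here is only a minimal stand-in `main.go` sketch (the handler and response text are placeholders) that would satisfy the nginx upstream configuration:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// nginx forwards traffic to the replicas on port 3000 (see the upstream block above).
	port := os.Getenv("PORT")
	if port == "" {
		port = "3000"
	}

	// The container hostname is handy for seeing which replica served a request.
	hostname, _ := os.Hostname()

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "hello from replica %s\n", hostname)
	})

	log.Printf("listening on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```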
almatins
1,875,561
VS Code shortcuts
% bash prompt Ctrl + C cancel any currently running terminal pry or rails console for interacting...
0
2024-06-03T15:11:46
https://dev.to/aizatibraimova/vs-code-shortcuts-257f
- **%**: bash prompt
- **Ctrl + C**: cancel the currently running terminal process
- **pry or rails console**: for interacting with a database
- **q**: to quit
- **Cmd + K**: clear terminal
- **Cmd + Shift + P**: Command Palette (fuzzy search)
- **Cmd + P**: quick open file
- **Option + Cmd + J**: hard refresh in Chrome
- **Ctrl + A / Cmd + A**: jump back to the beginning of the line
- **Ctrl + E**: end of the line
- **Cmd + F**: find (and replace)
- **Cmd + Shift + F**: find (and replace) globally
- **Cmd + D**: find next selection
- **Shift + Option + Down**: duplicate lines
aizatibraimova
1,875,559
Seamless Local Development with MFA-Enabled Roles on AWS
This article is a translation of https://zenn.dev/tilfin/articles/ab53ae77a8378f. The...
0
2024-06-03T15:10:23
https://dev.to/tilfin/seamless-local-development-with-mfa-enabled-roles-on-aws-89i
aws, localdev
--- title: "Seamless Local Development with MFA-Enabled Roles on AWS" published: true tags: - AWS - localdev --- This article is a translation of https://zenn.dev/tilfin/articles/ab53ae77a8378f. ## The Challenge Many of us have an AWS account for our main login portal and separate AWS accounts for each project. We often need to assume roles to obtain temporary credentials and access resources like S3 and DynamoDB on AWS from our local development environment. The challenge lies in the short-lived nature of these credentials. In a development environment where the server automatically reloads upon code changes, manually renewing credentials and restarting the server every hour becomes a tedious task. This becomes even more cumbersome with MFA (Multi-Factor Authentication), requiring manual one-time password entry. This article outlines the steps I took to streamline this process and enable seamless development without interruptions. ## Temporary Credential Issuance While `aws sts assume-role` is the standard AWS CLI command for this purpose, I've developed a tool called `swrole` to simplify this process. https://github.com/tilfin/homebrew-aws/tree/master?tab=readme-ov-file#swrole ## swrole 1.1 Release This latest update introduces multiple ways to handle one-time passwords (OTP). While version 1.0 only supported interactive input, version 1.1 allows you to pass the OTP directly as a command argument using `-t 123456`. Additionally, you can now define `generate_token` within the target profile section of your `~/.aws/config` file. This enables automatic OTP retrieval from services like 1Password. ### ~/.aws/credentials ```ini [my-company] aws_access_key_id = XXXXXXXXXX aws_secret_access_key = XXXXXXXXXX ``` ### ~/.aws/config ```ini [profile my-company] region = ap-northeast-1 [profile project-dev] role_arn = arn:aws:iam::111111111111:role/developer mfa_serial = arn:aws:iam::000000000000:mfa/member source_profile = my-company region = ap-northeast-1 generate_token = op item get "AWS Account" --otp ``` In this example, **generate_token** is configured to fetch the OTP for "AWS Account" using the [1Password CLI](https://developer.1password.com/docs/cli/). ## Leveraging Process Credential Provider in AWS SDKs During my research, I discovered the `credential_process` configuration option in `~/.aws/config`. This allows AWS SDKs to execute a specified command and utilize the resulting JSON output (from standard output) as credentials. https://docs.aws.amazon.com/sdkref/latest/guide/feature-process-credentials.html Recognizing its potential, I've incorporated this functionality into **swrole** version 1.1. Using the `-j` option now outputs the credential JSON in the format expected by the Process credential provider. However, directly using **credential_process** with the `project-dev` profile won't work seamlessly due to the SDK's credential prioritization. We need to define a separate profile specifically for application development. ### ~/.aws/config ```ini [profile app-dev] credential_process = swrole -j project-dev ``` ### ~/.aws/credentials ```ini [app-dev] region = ap-northeast-1 ``` While other SDKs are untested, using `~/.aws/credentials` for `credential_process` works with AWS SDK JavaScript v3. 
Now, you can run your development server with this AWS profile, triggering the seamless credential fetching process: ``` $ AWS_PROFILE=app-dev pnpm dev ``` **Important:** Ensure your AWS SDK utilizes proper credential caching to avoid issues with concurrent requests using the same one-time password. ## Additional Notes I encountered some challenges with AWS SDK JavaScript v3's credential caching behavior related to this approach. You can find a detailed explanation in this article: [Optimizing Credential Configuration in AWS SDK for JavaScript v3: Understanding Cache Mechanisms and Best Practices](https://dev.to/tilfin/optimizing-credential-configuration-in-aws-sdk-for-javascript-v3-understanding-cache-mechanisms-and-best-practices-oka) Furthermore, this method eliminates the need for manual OTP input even with the AWS CLI itself.
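For reference, a command configured as `credential_process` (such as `swrole -j` here) is expected to print JSON in roughly the following shape on standard output; this is the format documented by AWS, and the values below are placeholders:

```json
{
  "Version": 1,
  "AccessKeyId": "ASIAXXXXXXXXXXXXXXXX",
  "SecretAccessKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "SessionToken": "xxxxxxxx",
  "Expiration": "2024-06-03T16:00:00Z"
}
```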
tilfin
1,875,558
Introduction to Web Development with Go (net/http package)
Introduction to Web Development with Go (net/http package) Go is a powerful programming...
0
2024-06-03T15:10:05
https://dev.to/romulogatto/introduction-to-web-development-with-go-nethttp-package-9al
# Introduction to Web Development with Go (net/http package) Go is a powerful programming language that has gained popularity in recent years, especially for web development. With its simplicity and efficiency, it provides developers with the tools they need to build high-performance websites and web applications. In this article, we will explore the fundamentals of web development using Go's `net/http` package. This built-in package provides a robust set of functionalities to create HTTP servers and handle HTTP requests/responses efficiently. ## Setting up your Development Environment Before diving into web development with Go, you'll need to set up your development environment. Follow these steps: 1. Install Go: Head over to the [official documentation](https://golang.org/dl/) and download the appropriate binary distribution for your operating system. 2. Verify Installation: Open a terminal or command prompt window and run the following command: ```shell go version ``` If you see an output similar to `go version go1.x.x`, congratulations! You have successfully installed Go. 3. Set up Workspace: Create a workspace directory where you will store all your Go code projects. This directory should be outside of the standard system paths like `/usr/bin` or `/opt`. 4. Configure Environment Variables: Add your workspace's bin folder path (`$WORKSPACE_DIR/bin`) to the `PATH` environment variable so that you can invoke your executables easily from any location. 5. IDE/Editor Setup: Choose an IDE or text editor that supports Go syntax highlighting and offers useful extensions or plugins for code completion, formatting, and debugging purposes. Some popular choices include Visual Studio Code (with Go extension), IntelliJ IDEA (with Golang Plugin), or Sublime Text (with golangconfig). With your development environment now set up correctly let's move on to exploring web development in Go! ## Creating Your First Web Server To get started with Go web development, let's create a basic HTTP server that listens for incoming requests and responds with a simple "Hello, World!" message. Create a new file called `main.go` in your workspace directory and add the following code: ```go package main import ( "fmt" "net/http" ) func helloWorldHandler(w http.ResponseWriter, r *http.Request) { fmt.Fprint(w, "Hello, World!") } func main() { http.HandleFunc("/", helloWorldHandler) http.ListenAndServe(":8080", nil) } ``` Let's go through what this code does step-by-step: 1. The `helloWorldHandler` function handles incoming HTTP requests. It takes two parameters: `w`, which is an interface to write the response to the client, and `r`, which represents the HTTP request received. 2. Inside the `helloWorldHandler` function, we use the `fmt.Fprint` function to write "Hello, World!" as our response directly into the response writer (`w`) passed as an argument. 3. In the `main` function, we register our handler (`helloWorldHandler`) with Go's default multiplexer (`http.HandleFunc`). This tells Go to route all incoming requests to this specific handler. 4. Finally, we start our server by calling `http.ListenAndServe`. It takes two parameters: a port number (in this case 8080), and an optional parameter for handling custom routers (set as nil for now). To run your web server locally, open a terminal or command prompt window in your workspace directory and execute this command: ```shell go run main.go ``` Visit [http://localhost:8080](http://localhost:8080) in your web browser. You should see "Hello, World!" displayed on the page. 
Congratulations! You've just created your first web server using Go's net/http package! ## Conclusion In this article, we've covered the basics of web development with Go's `net/http` package. We started by setting up our development environment and creating a simple "Hello, World!" server. Go's `net/http` package provides extensive capabilities for building robust web applications. With its powerful features like routing, middleware support, and concurrent handling of requests, you can build scalable and efficient web services. By continuing to explore the various functions and methods available in the `net/http` package documentation, you'll be able to develop complex web applications with ease using Go. Happy coding!
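As a small illustration of the middleware support mentioned in the conclusion, here is a minimal sketch that stays entirely within the standard library and wraps the handler pattern used in this article with request logging:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// loggingMiddleware wraps any http.Handler and logs method, path and duration.
func loggingMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		log.Printf("%s %s took %s", r.Method, r.URL.Path, time.Since(start))
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "Hello, World!")
	})

	// Every request now passes through the logging middleware before reaching the mux.
	log.Fatal(http.ListenAndServe(":8080", loggingMiddleware(mux)))
}
```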
romulogatto
1,875,557
Deploy SocketIO Server Using Docker and Nginx Load Balancer (+SSL)
In my last project, I needed to have socket servers that could manage connections for more than 10...
0
2024-06-03T15:05:00
https://dev.to/almatins/deloy-socketio-server-using-docker-and-nginx-load-balancer-ssl-2coa
docker, nginx, socketio, ssl
In my last project, I needed to have socket servers that could manage connections for more than 10 thousands of mobile applications. I also needed to make the connection using a secure connection if possible. After digging into some scenarios, I finally can manage the socket server deployment using Docker and Nginx in a single Virtual Private Server. Here are the steps that I did: ## Project Structure This is how I organize the socket server project ``` /socket-server |-- src/ |-- index.js |-- loggers/ |-- .env |-- .env_one |-- .env_two |-- .env_three |-- .env_four |-- Dockerfile |-- docker-compose.yaml |-- nginx.conf |-- package.json |-- yarn.lock ``` ## The index.js ```js import express from "express"; import { createServer } from "http"; import { Redis } from "ioredis"; import { Server } from "socket.io"; import { createAdapter } from "@socket.io/redis-adapter"; import logger from "./logger/winston_logger.js"; import bodyParser from "body-parser"; const app = express(); const http = createServer(app); const port = process.env.PORT; const serverName = process.env.SERVER_NAME; const redisHost = process.env.REDIS_HOST; const redisPort = process.env.REDIS_PORT; const socketKey = process.env.SECRET_KEY; let numConnectedSockets = 0; const pubClient = new Redis({ host: redisHost, port: redisPort, keyPrefix: "myapp", }); const subClient = pubClient.duplicate(); logger.info( `Redis client connected to ${redisHost}:${redisPort} with status: pub: ${pubClient.status} sub: ${subClient.status}` ); // listen to redis connection status pubClient.on("connect", () => { logger.info("Redis pub client connected!"); }); subClient.on("connect", () => { logger.info("Redis sub client connected!"); }); // create socket io server with adapter // add your other services here const io = new Server(http, { cors: { origin: ["http://localhost:3000", "http://localhost:8020"], methods: ["GET", "POST"], }, adapter: createAdapter(pubClient, subClient), }); // express listen to port http.listen(port, () => { logger.info(`Server ${serverName} is running on port ${port}`); }); // socket io connection io.on("connection", (socket) => { logger.info(`User connected: ${socket.id} to socket server ${serverName}`); socket.emit("hi", { serverName: serverName, msg: `Hello from socket server ${serverName}!`, }); // socket io events socket.on("disconnect", () => { numConnectedSockets--; logger.info(`User disconnected: ${socket.id} from ${serverName}!`); }); socket.on("hello", (msg) => { logger.info(`Receive a hello from ${socket.id} with ${msg}`); logger.info(`server socket key is ${socketKey}`); logger.info(`test socket key vs msg is ${socketKey} vs ${msg}`); if (msg) { if (msg === socketKey) { logger.info(`User ${socket.id} sent the correct key. Welcome!`); socket.emit("introduce_yourself", { serverName: serverName, msg: `Please introduce yourself!`, }); } else { socket.disconnect(); } } else { socket.disconnect(); } }); socket.on("introduction", (msg) => { logger.info(`Receive a introduction from ${socket.id}`); if (msg) { const { name, token } = msg; if (!name || !token) { logger.error( `User ${socket.id} did not introduce themselves correctly!. 
Disconnecting...` ); // disconnect user socket.disconnect(); return; } logger.info( `User ${socket.id} introduced as ${name} with token ${token}` ); // validate name and token let isAuthorized = false; if (name === "admin" && token === "admin") { isAuthorized = true; logger.info(`User ${socket.id} is an admin!`); } if (token === socketKey) { logger.info(`User ${socket.id} is an authenticated user!`); isAuthorized = true; } if (isAuthorized === false) { logger.error( `User ${name} with socket id ${socket.id} is not authorized. Disconnecting...` ); // disconnect user socket.disconnect(); return; } numConnectedSockets++; } else { logger.error( `Can not get the message from ${socket.id} while introduction. Disconnecting...` ); logger.error({ msg, }); // disconnect user socket.disconnect(); } }); socket.on("log_event", (data) => { socket.broadcast.emit("broadcast_log_event", { serverName: serverName, msg: data, }); }); }); // error handling io.on("error", (error) => { logger.error(`Error: ${error}`); }); app.use(bodyParser.json()); // express default route app.get("/heartbeat", (req, res) => { res.sendStatus(200); }); // base url app.get("/", (_, res) => { winstonLogger.info(`base endpoint called`); res.status(200).json({ error: false, msg: "Base endpoint works", }); }); app.get("/", (req, res) => { res.status(200).json({ message: `Welcome to socket server ${serverName}`, socketConnected: numConnectedSockets, }); }); ``` As you may noticed here, we are using the Redis adapter to keep the socket servers synced when we have incoming and upcoming events emitted. You can go to the official documentation here for the Redis adapter. ## Nginx Configuration The nginx configuration goes like this ``` # Reference: https://www.nginx.com/resources/wiki/start/topics/examples/full/ worker_processes 4; events { worker_connections 1024; } http { server { listen 80; location / { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_pass http://nodes; # enable WebSockets proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } } upstream nodes { # enable sticky session with either "hash" (uses the complete IP address) hash $remote_addr consistent; # or "ip_hash" (uses the first three octets of the client IPv4 address, or the entire IPv6 address) # ip_hash; # or "sticky" (needs commercial subscription) # sticky cookie srv_id expires=1h domain=.example.com path=/; server server-one:3000; server server-two:3000; server server-three:3000; server server-four:3000; } } ``` ## Dockerfile ```yaml # Use the official Node.js 21 image as the base image FROM node:21-alpine # Set the working directory inside the container WORKDIR /app # Copy package.json and package-lock.json to the working directory COPY package*.json ./ # Install the app dependencies RUN yarn install --production=true # Copy the rest of the app source code to the working directory COPY . . # Set the non-root user to run the application USER node # Expose the port on which the app will run EXPOSE 3000 # Start the application CMD node --env-file=.env src/index.js ``` ## Docker Compose ```yaml services: nginx: image: nginx:alpine volumes: - ./nginx.conf:/etc/nginx/nginx.conf:ro links: - server-one - server-two - server-three - server-four ports: - "3002:80" server-one: build: . expose: - "3000" env_file: - .env_one restart: always server-two: build: . expose: - "3000" env_file: - .env_two restart: always server-three: build: . 
    expose:
      - "3000"
    env_file:
      - .env_three
    restart: always
  server-four:
    build: .
    expose:
      - "3000"
    env_file:
      - .env_four
    restart: always
```

Nice, then we can start deploying the server using the command `docker compose build` and following with the command `docker compose up -d`.

## Host Machine Nginx + Letsencrypt

To be able to use the `wss://mydomain` for the socket URL when the socket client wants to connect to the server, we can set up SSL using Letsencrypt on the host machine. I will write about this in a separate post. After completing the SSL setup, we can add this block of configuration in the nginx configuration inside the `server {` block. Usually this is the `/etc/nginx/sites-available/default` file.

```
location /socket.io/ {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;

    # the port of the nginx in docker compose
    proxy_pass http://localhost:3002;

    # enable WebSockets
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

That’s it, then your socket client will be able to connect using the `wss://mydomain` socket server URL. If you are using Flutter, there is a socket io client plugin that you can use. Check it out here. Below is the code to connect to the socket server with SSL enabled.

```dart
import 'package:socket_io_client/socket_io_client.dart' as io;

// create a new socket instance
socket = io.io(socketServer, {
  'transports': ['websocket'],
  'secure': true,
});

socket.onConnect((_) {
  debugPrint('get connected to the server $socketServer');
});

socket.onError((error) {
  debugPrint('error: $error');
});

socket.onDisconnect((_) => debugPrint('disconnected from $socketServer'));
```

I hope you found this post useful. Happy coding!
almatins
1,875,556
Optimizing Credential Configuration in AWS SDK for JavaScript v3: Understanding Cache Mechanisms and Best Practices
This article is a translation of https://zenn.dev/tilfin/articles/56f8dc56b83901. ...
0
2024-06-03T15:04:49
https://dev.to/tilfin/optimizing-credential-configuration-in-aws-sdk-for-javascript-v3-understanding-cache-mechanisms-and-best-practices-oka
node, javascript, aws, awssdk
--- title: Optimizing Credential Configuration in AWS SDK for JavaScript v3: Understanding Cache Mechanisms and Best Practices published: true tags: - NodeJS - JavaScript - AWS - AWSSDK --- This article is a translation of https://zenn.dev/tilfin/articles/56f8dc56b83901. ## Background The AWS SDK for JavaScript v3 introduced a significant shift in client configuration compared to v2. While v2 allowed for global SDK settings, v3 adopts a per-client instance runtime configuration approach. Consequently, credential retrieval, which was internally cached in v2 even with randomly generated clients for each AWS resource, now presents potential challenges in v3 with frequent credential requests. ## Delving into Credential Retrieval Logic The simplest approach involves directly setting AWS access and secret keys as `credentials` in the constructor arguments of AWS resource clients. However, this method is not ideal for real-world application development. Applications deployed on EC2 or ECS typically retrieve credentials from instance or container metadata services, while Lambda functions often obtain them via environment variables. The AWS SDK v3 implements these credential supply logics as **Credential Providers**. You can also specify a **Credential Provider** for the client's `credentials`. Details are described on the following page: https://github.com/aws/aws-sdk-js-v3/tree/main/packages/credential-providers If nothing is specified for `credentials`, the function set in `credentialDefaultProvider` is executed first to generate a **Credential Provider**, which is then used as `credentials`. The default `credentialDefaultProvider` is `defaultProvider`. It attempts various acquisition methods. By using this method, you can automatically refer to `~/.aws/credentials` in a local environment and metadata in a deployed environment. To explicitly specify the implementation of `defaultProvider`, use `fromNodeProviderChain`. ## Credential Caching Mechanism Credential retrieval occurs each time a client invokes a command. However, **Credential Providers, which implement the various retrieval logics provided by the AWS SDK, do not have a caching mechanism. The only exception is the `memoize` function of `@smithy/property-provider` used internally by `defaultProvider`.** Furthermore, **the re-acquisition logic for time-limited credentials, including session tokens, is also implemented only here.** https://github.com/aws/aws-sdk-js-v3/blob/main/packages/credential-provider-node/src/defaultProvider.ts#L59 https://github.com/smithy-lang/smithy-typescript/blob/main/packages/property-provider/src/memoize.ts#L46 ## What are the Best Practices? Based on the insights so far, in most cases where you are implementing an application that operates within the same AWS account, "using `defaultProvider`", or "specifying nothing", is the best approach. However, there is a potential pitfall here. As mentioned at the beginning, in v3, since each client has a different **Credential Provider**, caching is also done on a per-client basis. For example, in a batch processing loop where the code "creates a client and retrieves an item", there is a possibility that credential acquisition will fail. Below is an issue where an error occurred because the access limit of EC2's metadata endpoint was reached. https://github.com/aws/aws-sdk-js-v3/issues/4867 As a workaround, the issue's comments suggest caching (reusing) the instances of each AWS client themselves. 
However, in applications that rely on many AWS services and where command transmissions to them occur intensively, this may not be avoidable. There is also an improvement proposal for this.

https://github.com/aws/aws-sdk-js-v3/issues/4612

## Conclusion

Currently, the best practice is to reuse `defaultProvider`. Since `fromNodeProviderChain` is provided by the SDK, generate it at the module level as shown below and reuse it.

```
import { fromNodeProviderChain } from "@aws-sdk/credential-providers";

export const credentialProvider = fromNodeProviderChain();
```

```
new S3Client({ credentials: credentialProvider });
new DynamoDBClient({ credentials: credentialProvider });
```

This will prevent the retrieval process from being concentrated internally.

### References

https://zenn.dev/luma/articles/bd3c59b3d7682d
tilfin
1,875,747
Free Java and AI Bootcamp: Build Your Portfolio
Join the Coding The Future GFT Bootcamp, a Java development event with a special focus...
0
2024-06-23T13:51:32
https://guiadeti.com.br/bootcamp-desenvolvimento-java-ia-gratuito/
bootcamps, cursosgratuitos, inteligenciaartifici, java
---
title: "Free Java and AI Bootcamp: Build Your Portfolio"
published: true
date: 2024-06-03 15:01:55 UTC
tags: Bootcamps,cursosgratuitos,inteligenciaartifici,java
canonical_url: https://guiadeti.com.br/bootcamp-desenvolvimento-java-ia-gratuito/
---

Join the Coding The Future GFT Bootcamp, a Java development event with a special focus on Artificial Intelligence, offered by Digital Innovation One (DIO) in partnership with the multinational GFT. With more than 5,000 scholarships available, this bootcamp is an opportunity to become a back-end developer in the language, starting from the essential fundamentals and advancing to more complex topics such as database modeling and implementing APIs with Spring Boot in the cloud. With a 60-hour workload, the program offers high-quality educational content, hands-on projects to enrich your portfolio, and coding challenges that will put your new skills to the test.

## Coding The Future GFT – Desenvolvimento Java com IA

Join the Coding The Future GFT Bootcamp, an exclusive initiative offered by Digital Innovation One (DIO) in collaboration with the multinational GFT, designed to turn you into a back-end developer specialized in Java and Artificial Intelligence.

![](https://guiadeti.com.br/wp-content/uploads/2024/06/image.png)

_Image from the bootcamp page_

With more than 5,000 scholarships available, this intensive program promises to equip you with knowledge ranging from basic fundamentals to advanced skills in cutting-edge technologies. Registration is open until June 30.

### Who Should Participate?

This bootcamp is ideal for professionals who want to complement the knowledge gained in college with practical experience, as well as for those who already work in back-end development and want to advance their careers with projects that make their portfolios stand out. It is also an excellent opportunity for anyone seeking real preparation for the demands of the job market and success in job interviews.

### Course Structure and Content

The bootcamp includes 60 hours of rich, detailed content, starting with the essential fundamentals of Java programming and progressing to complex topics such as database modeling and API development using Spring Boot in the cloud. This carefully planned progression ensures solid learning that can be applied in the real world.

### Bootcamp Contents

The bootcamp includes hands-on projects that let participants apply what they have learned and build a solid portfolio. Frequent coding challenges also help consolidate knowledge and prepare students for the real demands of the job market.

#### Essential Java and Collaborative Development with Git

- DIO Bootcamps: Free Education and Employability Together!;
- Introduction to the Java Platform;
- Java Development Environment;
- Code Versioning with Git and GitHub;
- Project Challenges: Build a Winning Portfolio;
- Contributing to an Open Source Project on GitHub;
- Opening Class: Coding The Future GFT – Desenvolvimento Java com IA.

#### Mastering the Java Programming Language

- Learning Java Syntax;
- Introduction and Conditional Structures with Java;
- Loop Structures in Java;
- Java and Exception Handling;
- Debugging Java;
- Coding Challenges: Sharpen Your Logic and Computational Thinking;
- Exploring Basic Coding Challenges in Java.
#### Object Orientation and Efficient Data Handling in Java

- Fundamentals of Object-Oriented Programming with Java;
- Pillars of Object-Oriented Programming in Java;
- Modeling the iPhone with UML: Music, Call, and Internet Features;
- Getting to Know Java Collections;
- Gaining Productivity with the Stream API;
- Building a Digital Bank with Java and Object Orientation;
- Mastering Intermediate Coding Challenges in Java.

#### Databases, Design Patterns, and APIs with Spring Boot

- Introduction to Relational Databases (SQL);
- Diving into the Spring Framework with Spring Boot;
- Creating a Documented REST API with Spring Web and Swagger;
- Adding Security to a REST API with Spring Security;
- Design Patterns with Java: From the Classics (GoF) to the Spring Framework;
- Publishing Your REST API to the Cloud Using Spring Boot 3, Java 17, and Railway;
- Rate this Bootcamp.

### Opportunities and Networking

Participants will have their profiles made available to DIO's partner companies through the Talent Match program, significantly increasing their chances of being recruited in the most sought-after technology areas.

### Important Dates

- Registration opens: 22/05/2024;
- Registration closes: 30/06/2024;
- Launch event: 03/06/2024.

### Live Sessions with Experts

Learn directly from renowned specialists through live sessions that offer first-hand insights into current industry practices and future technology trends.

## Java and AI
Java, one of the most popular and versatile programming languages, has proven to be a robust choice for developers working with Artificial Intelligence (AI). Thanks to its portability, ease of use, memory management features, and rich class library, the language makes it easier to develop complex AI systems that are scalable and efficient.

### Why Java for Artificial Intelligence?

Java is known for its strong memory management and security, which is essential when dealing with large volumes of data and intensive processing, both common characteristics of AI projects. Automatic memory management helps prevent memory leaks and related problems, making Java well suited to long AI model training sessions that require stability and efficiency.

### A Wide Range of Resources

The variety of libraries and frameworks available for the language, such as Weka, Deeplearning4j, and Apache Mahout, gives developers powerful tools for machine learning and large-scale data processing. These libraries are well documented and maintained, with a large community of users who can provide support and contributions.

### AI Applications Using Java

Java is frequently used to develop machine learning applications thanks to the availability of frameworks such as Deeplearning4j. The framework is Java-specific and allows deep neural networks to be built and trained within Java environments, making integration with existing enterprise applications easier.

### Natural Language Processing (NLP)

The language also plays a role in NLP. Libraries such as OpenNLP and Stanford NLP offer functionality for NLP tasks like tokenization, part-of-speech tagging, and sentiment analysis, allowing developers to build more interactive and intelligent user interfaces.

### Recommendation Systems

Java is well suited to building robust recommendation systems that can scale to handle large datasets of users and products. Frameworks such as Apache Mahout provide capabilities for collaborative processing and content-based filtering, which are essential for personalized recommendation systems.

### Challenges and Considerations

Despite its many advantages, working with the language on AI projects can be challenging for beginners because of the complexity of the language and of the environment needed to effectively manage AI projects.

### Integration with Other Technologies

Integrating Java with other AI technologies, especially those most often developed in Python, may require additional work. Although bridges and APIs exist to make this integration easier, developers should be prepared for potential challenges.

## GFT

GFT is a multinational IT consulting and services company specialized in digital solutions for the financial sector. Founded in 1987 in Germany, GFT grew rapidly to become a reference in innovation and technology, serving large financial institutions around the world. With a strong presence in Europe, the Americas, and Asia, the company stands out for its ability to combine sophisticated technological solutions with the specific needs of the financial sector.

### Services and Solutions Offered by GFT

One of GFT's main focuses is helping clients navigate digital transformation.
The company offers a full range of services, from strategic consulting to the implementation of advanced technologies such as cloud computing, artificial intelligence, and blockchain. These solutions are designed to optimize processes, increase operational efficiency, and improve the customer experience at banks and other financial institutions.

### Custom Software Development

GFT is known for its tailor-made software development, creating solutions that fit each client's specific needs and requirements. Using agile methodologies and the latest technologies, GFT ensures that its projects are delivered with high quality and on time, contributing significantly to its clients' business success.

### Innovation and Research at GFT

GFT has established several innovation centers around the world, where multidisciplinary teams work on research and development of new technologies and solutions. These centers are essential to the company's innovation strategy, allowing new ideas to be explored and prototypes to be developed in an environment that encourages creativity and collaboration.

### Sustainability and Social Responsibility

GFT is committed to sustainability and corporate social responsibility. The company implements policies that minimize its environmental impact and promotes initiatives that contribute to the well-being of the community. This includes investments in technology education, support for digital inclusion projects, and participation in community development programs.

## Accelerate your tech career with the Bootcamp. Sign up now and transform your future!

[Registration for Coding The Future GFT – Desenvolvimento Java com IA](https://www.dio.me/bootcamp/coding-future-gft-desenvolvimento-java-com-ia) must be completed on the DIO website.

## Know someone interested in Java and AI? Share this and open doors to great opportunities!

Did you enjoy this content about the free bootcamp? Then share it with your friends!

The post [Bootcamp De Java com IA Gratuito: Desenvolva Seu Portifólio](https://guiadeti.com.br/bootcamp-desenvolvimento-java-ia-gratuito/) first appeared on [Guia de TI](https://guiadeti.com.br).
guiadeti
1,873,932
Introduction to Arrays in JavaScript
Array is a commonly used tool for storing data in a structured collection. There are two different...
16,763
2024-06-03T15:00:00
https://www.thedevspace.io/course/javascript-arrays
javascript, webdev, tutorial, programming
[Array](https://www.thedevspace.io/course/javascript-arrays) is a commonly used tool for storing data in a structured collection. There are two different syntaxes for creating arrays in JavaScript: ```javascript let arr = new Array("Apple", "Orange", "Banana"); console.log(arr); ``` ```text [ 'Apple', 'Orange', 'Banana' ] ``` Don't forget the keyword `new` here. This is different from how we created strings using `String()` or created numbers with `Number()`. Array is not a primitive data type, it is a special kind of object, and `new` is how we can create a new instance of an object in JavaScript. If this doesn't make sense, don't worry, we will get back to this topic after we've discussed [objects](https://www.thedevspace.io/course/javascript-objects) and the [Object-Oriented Programming](https://www.thedevspace.io/course/javascript-object-oriented-programming). For now, you only need to be familiar with the syntax. The second method to create an array is to use a pair of square brackets, which is also referred to as the array literal: ```javascript let arr = ["Apple", "Orange", "Banana"]; console.log(arr); ``` ```text [ 'Apple', 'Orange', 'Banana' ] ``` In most cases, the second syntax is used as it is the most convenient. The elements inside the array can be any data type you want. For example, here is an array with numbers, `BigInt`, strings, Boolean values, `null`, `undefined`, arrays, objects, and even functions: ```javascript // prettier-ignore let arr = [ 100, // Number 999999999999999999n, // BigInt "Qwerty", // String null, // null undefined, // undefined [1, 2, 3], // Array { name: "John Doe" }, // Object function doNothing() { // Function return null; }, ]; console.log(arr); ``` ```text [ 100, 999999999999999999n, 'Qwerty', null, undefined, [ 1, 2, 3 ], { name: 'John Doe' }, [Function: doNothing] ] ``` ## Accessing an array element You can retrieve an array element by specifying its index number. ```javascript let arr = [5, 6, 7, 8]; console.log(arr[0]); console.log(arr[1]); console.log(arr[2]); console.log(arr[3]); ``` ```text 5 6 7 8 ``` It is important to note that the index starts from `0`, not `1`, so `a[0]` points to the first element, `a[1]` points to the second element, and so on. Alternatively, you can use the `at()` method associated with arrays. For instance, ```javascript let arr = [5, 6, 7, 8]; console.log(arr.at(0)); console.log(arr.at(1)); console.log(arr.at(2)); console.log(arr.at(3)); ``` The `at()` method works the same as the `arr[<index>]` syntax, except when you need to retrieve the last item in the array. Most other programming languages offer what is called the negative indexing, which allows you to retrieve the last item in an array with `arr[-1]`, but that is not possible in JavaScript. So for a long time, people's solution was to use the length of the array like this: ```javascript let arr = [5, 6, 7, 8]; console.log(arr[arr.length - 1]); ``` ```text 8 ``` `arr.length` gives the number of elements in the array, which is 4 in this case. But remember the index starts from 0, so the index of the last item should be `arr.length - 1`, which gives 3. Recently, the `at()` method was introduced to offer an easier solution, allowing you to use `at(-1)` to access the last item. ```javascript let arr = [5, 6, 7, 8]; console.log(arr.at(-1)); ``` ```text 8 ``` You can also change an array element using the assignment operator (`=`). 
```javascript let a = [5, 6, 7, 8]; a[1] = 100; console.log(a); ``` ```text [ 5, 100, 7, 8 ] ``` ## Adding and removing array elements Besides `at()`, there are also methods that enable you to add or remove elements from the array. - `pop()` The `pop()` method removes the last item from the end of the array. ```javascript let arr = ["Apple", "Orange", "Banana"]; arr.pop(); console.log(arr); ``` ```text [ 'Apple', 'Orange' ] ``` - `push()` The `push()` method adds new items to the end of the array. ```javascript let arr = ["Apple", "Orange", "Banana"]; arr.push("Plum"); console.log(arr); ``` ```text [ 'Apple', 'Orange', 'Banana', 'Plum' ] ``` - `shift()` The `shift()` method removes the first element from the beginning of the array, and then shifts all other elements to lower indexes. ```javascript let arr = ["Apple", "Orange", "Banana"]; arr.shift(); console.log(arr); ``` ```text [ 'Orange', 'Banana' ] ``` - `unshift()` The `unshift()` method moves all elements to higher indexes, and add a new item to the beginning of the array. ```javascript let arr = ["Apple", "Orange", "Banana"]; arr.unshift("Plum"); console.log(arr); ``` ```text [ 'Plum', 'Apple', 'Orange', 'Banana' ] ``` In practice, the `shift()` and `unshift()` methods are much slower compared to `pop()` and `push()`, due to the shifting of the array elements. If possible, you should avoid working at the beginning of the array, and only use `pop()` and `push()` in your code. ## Concatenating arrays JavaScript also allows you to concatenate multiple arrays together into one array using the `concat()` method. For example, ```javascript let arr1 = ["Apple", "Orange", "Banana"]; let arr2 = ["Plum", "Peach", "Pear"]; arr1 = arr1.concat(arr2); console.log(arr1); ``` ```text [ 'Apple', 'Orange', 'Banana', 'Plum', 'Peach', 'Pear' ] ``` The `concat()` method also enables you to join more than two arrays. ```javascript let arr1 = ["Apple", "Orange", "Banana"]; let arr2 = ["Plum", "Peach"]; let arr3 = ["Pear"]; arr1 = arr1.concat(arr2, arr3); console.log(arr1); ``` ```text [ 'Apple', 'Orange', 'Banana', 'Plum', 'Peach', 'Pear' ] ``` Alternatively, you can use the [spread syntax](https://www.thedevspace.io/course/javascript-rest-parameter-spread-syntax) (`...`). ```javascript let arr1 = ["Apple", "Orange", "Banana"]; let arr2 = ["Plum", "Peach"]; let arr3 = ["Pear"]; let arr = [...arr1, ...arr2, ...arr3]; console.log(arr); ``` ```text [ 'Apple', 'Orange', 'Banana', 'Plum', 'Peach', 'Pear' ] ``` ## Searching arrays - `indexOf()` and `lastIndexOf()` Using the `indexOf()` method, you can locate the first occurrence of the given item in the array. ```javascript let arr = ["Apple", "Orange", "Orange"]; console.log(arr.indexOf("Orange")); ``` ```text 1 ``` Notice that there are two `"Orange"`s in this array, but only the location of its first occurrence is returned. If the element does not exist in the array, the method will return `-1`. ```javascript let arr = ["Apple", "Orange", "Banana"]; console.log(arr.indexOf("Peach")); ``` ```text -1 ``` `lastIndexOf()` is the opposite of `indexOf()`. It returns the location of the last occurrence of the item. ```javascript let arr = ["Apple", "Orange", "Orange"]; console.log(arr.lastIndexOf("Orange")); ``` ```text 2 ``` Similarly, if the element does not exist in the array, `-1` will be returned. 
```javascript let arr = ["Apple", "Orange", "Orange"]; console.log(arr.lastIndexOf("Peach")); ``` ```text -1 ``` - `find()` `find()` is one of the more advanced methods for arrays, as it searches the array based on a test function. If you are new to programming, and have no idea what a function is, you can go through the [function lessons](https://www.thedevspace.io/course/javascript-functions) first, and then come back to this topic. ```javascript let arr = [23, -5, 667, 1, -3, 6, 17, -69]; let answer = arr.find(testFunction); // This example test function finds the first array element that is greater than 50 function testFunction(value, index, array) { return value > 50; } console.log(answer); ``` ```text 667 ``` The `find()` method will pass each item in the array to the test function, and the first element that passes the test will be returned. The test function should accept three arguments, `value`, which corresponds to each element in the array, `index`, the index number of that element, and `array`, which is the entire array. You will encounter many more helper functions like this that accept a predefined list of arguments, and some of them might be difficult to understand. In this case, you could print them out into the console using `console.log()`. This would help you understand what they are, and what you can do with them. ```javascript let arr = [23, -5, 667, 1, -3, 6, 17, -69]; let answer = arr.find(testFunction); // This example test function finds the first array element that is greater than 50 function testFunction(value, index, array) { console.log(`Value: ${value}`); console.log(`Index: ${index}`); console.log(`Array: ${array}`); console.log("\n"); return value > 50; } console.log(answer); ``` ```text Value: 23 Index: 0 Array: 23,-5,667,1,-3,6,17,-69 Value: -5 Index: 1 Array: 23,-5,667,1,-3,6,17,-69 Value: 667 Index: 2 Array: 23,-5,667,1,-3,6,17,-69 ``` - `filter()` The `filter()` method is similar to `find()`, except instead of returning a single value that passes the test, `filter()` returns an array of values. ```javascript let arr = [23, -5, 667, 150, -3, 60, 17, -69]; let answer = arr.filter(testFunction); function testFunction(value, index, array) { return value > 50; } console.log(answer); ``` ```text [ 667, 150, 60 ] ``` - `every()` The `every()` method iterates over the entire array, and examines if the element passes a test. If all element passes, `every()` returns `true`, and if not, `every()` returns `false`. ```javascript let arr = [1, 2, 3, 4, 5]; let answer = arr.every(testFunction); // Check if all elements are greater than 0 function testFunction(value, index, array) { return value > 0; } console.log(answer); ``` ```text true ``` - `includes()` The `includes()` method tests if a given value exists in the array. ```javascript let arr = [1, 2, 3, 4, 5]; let answer = arr.includes(100); console.log(answer); ``` ```text false ``` You can also provide an index, which tells the `includes()` method to check if the value exists at the exact index. ```javascript let arr = [100, 2, 3, 4, 5]; let answer = arr.includes(100, 2); console.log(answer); ``` ```text false ``` Notice that even though `100` exists in the array, it does not exist at the index `2`, so the method returns `false`. ## Sorting arrays JavaScript offers four different methods that allow you to sort the array, `sort()`, `toSorted()`, `reverse()`, and `toReversed()`. ### Sorting strings By default, these methods are used to sort arrays with string values. - `sort()` will sort the array alphabetically. 
```javascript let arr = ["Apple", "Orange", "Banana"]; arr.sort(); console.log(arr); ``` ```text [ 'Apple', 'Banana', 'Orange' ] ``` - `reverse()` will reverse the array. ```javascript let arr = ["Apple", "Orange", "Banana"]; arr.reverse(); console.log(arr); ``` ```text [ 'Banana', 'Orange', 'Apple' ] ``` - `toSorted()` is just like `sort()`, except it returns the sorted result without altering the original array. ```javascript let arr = ["Apple", "Orange", "Banana"]; let sortedArr = arr.toSorted(); console.log(sortedArr); console.log(arr); ``` ```text [ 'Apple', 'Banana', 'Orange' ] [ 'Apple', 'Orange', 'Banana' ] ``` - `toReversed()` method is just like `reverse()`, except it returns the sorted result without altering the original array. ```javascript let arr = ["Apple", "Orange", "Banana"]; let sortedArr = arr.toReversed(); console.log(sortedArr); console.log(arr); ``` ```text [ 'Banana', 'Orange', 'Apple' ] [ 'Apple', 'Orange', 'Banana' ] ``` ### Sorting numbers The `sort()` method can also be used to sort numbers, but it requires a bit more customization. Because by default, these methods will convert the numbers into strings, and then sort them alphabetically, which would give results like this: ```javascript let arr = [2, -1, 45, 3, 21, 17, 9, 20]; console.log(arr.toSorted()); ``` ```text [ -1, 17, 2, 20, 21, 3, 45, 9 ] ``` To sort numbers, you must pass a compare function. ```javascript let arr = [2, -1, 45, 3, 21, 17, 9, 20]; arr.sort(compareFunction); function compareFunction(a, b) { return a - b; } console.log(arr); ``` ```text [ -1, 2, 3, 9, 17, 20, 21, 45 ] ``` The compare function takes two input values, `a` and `b`, and return either a positive value, a negative value, or 0. The function compares all the values in the array, two numbers at a time. - If the function returns positive, `b` will be placed before `a`. - If the function returns negative, `a` will be placed before `b`. - If the function returns 0, no changes will be made. This example sort the array in a ascending order. To sort in the descending order, simply make the compare function return `b - a`. ```javascript let arr = [2, -1, 45, 3, 21, 17, 9, 20]; arr.sort(compareFunction); function compareFunction(a, b) { return b - a; } console.log(arr); ``` ```text [ 45, 21, 20, 17, 9, 3, 2, -1 ] ``` ## Iterating over an array Another operation we often perform on arrays is iterating over all of its values. The most straightforward way to do this is with a [loop](https://www.thedevspace.io/course/javascript-for-loops). If you don't know what a loop is, please go through the linked lesson first. ### Using loops Take a look at this example: ```javascript for (let i = 0; i < a.length; i++) { . . . } ``` `let i = 0` initiates the variable `i` which will be the index number of each array element. The index `i` starts from 0. `i` will increment by 1 for every iteration (`i++`, which is a shorthand for `i = i + 1`), until it reaches `a.length`. `a.length` is the length of the array, which means the loop will terminate when `i` equals `a.length - 1`. Because for the next iteration, `i` will become `a.length`, which violates the condition `i < a.length`. Besides using the index, you can also access each array element with a `for of` loop, which looks like this: ```javascript let arr = [2, -1, 45, 3, 21, 17, 9, 20]; for (let ele of arr) { console.log(ele); } ``` ```text 2 -1 45 3 21 17 9 20 ``` ### Using methods There are also two built-in methods in JavaScript that allow you to iterate over the array without having to use a loop. 
This is one of the greatest features in JavaScript, because for most other programming languages, you will have to create the loop your self. The first method is `forEach()`. Here is an example of calculating the sum of all array elements using the `forEach()` method. ```javascript let arr = [2, -1, 45, 3, 21, 17, 9, 20]; let sum = 0; arr.forEach(calcSum); function calcSum(value, index, array) { sum = sum + value; } console.log(sum); ``` ```text 116 ``` If you would like to perform some actions to each array element, and then return the transformed result, use the `map()` method. For example, here we are using `map()` to return an array whose elements are the square of the original value. ```javascript let arr = [2, -1, 45, 3, 21, 17, 9, 20]; let sum = 0; let squared = arr.map(calcSum); function calcSum(value, index, array) { return value ** 2; } console.log(squared); console.log(arr); ``` ```text [ 4, 1, 2025, 9, 441, 289, 81, 400 ] [ 2, -1, 45, 3, 21, 17, 9, 20 ] ``` The `map()` method will return a new array with the transformed elements, and the original will not be changed. ## Matrix Matrix is a concept that exists in both mathematics and computer science. It is defined as a two dimensional array with its elements arranged in rows and columns like this: ![Matrix](https://www.thedevspace.io/media/course/javascript/matrix.excalidraw.png) It is possible for you to create such a structure using JavaScript by putting several smaller arrays inside a bigger array. ```javascript let matrix = [ [1, 2, 3], [4, 5, 6], [7, 8, 9], ]; ``` To retrieve an element from this matrix, you need to specify two indexes, `matrix[<row>][<col>]`. ```javascript console.log(matrix[0][1]); // -> 2 console.log(matrix[1][0]); // -> 4 console.log(matrix[2][2]); // -> 9 ``` ## Further readings - [What are the Data Types in JavaScript](https://www.thedevspace.io/course/javascript-data-types) - [What are Maps and Sets in JavaScript](https://www.thedevspace.io/course/javascript-maps-sets) - [What are Higher Order Functions in JavaScript](https://www.thedevspace.io/course/javascript-higher-order-functions) - [JavaScript and Asynchronous Programming](https://www.thedevspace.io/course/javascript-asynchronous-programming) - [How to Optimize Your Web App for Better Performance](https://www.thedevspace.io/course/miscellaneous-web-app-optimization)
huericnan
1,875,554
Changing SQL Dialect From Teradata To SQLite
annotated notes from converting a 2,000 line SQL script
0
2024-06-03T14:59:38
https://dev.to/geraldew/changing-sql-dialect-from-teradata-to-sqlite-i23
sql, database, data
--- title: Changing SQL Dialect From Teradata To SQLite published: true description: annotated notes from converting a 2,000 line SQL script tags: SQL,Database,Data # cover_image: https://direct_url_to_image.jpg # Use a ratio of 100:42 for best results. # published_at: 2024-06-03 14:54 +0000 --- ## Source of This Annotation I recently had the occasion to convert an interesting SQL script of mine from being in Teradata SQL to running in SQLite. As I did so, I made notes of the various specific changes I had to make. From those, I have plucked out the concepts and some examples and then added annotations.It is not a master guide for the topic, rather is just some sharing of a single experience. ## Environment Some of the issues encountered are probably **not** about the difference in dialect, but are instead just that my place in the two environments are quite different. For Teradata the environment is mostly not in my control. For SQLite, while it is completely in my control, I've not added any elements that might make it more equivalent - largely because the intention is to easy to replicate. The most immediate aspect of this is that of namespaces, where for Teradata I must use multiple namespaces as forced by system-wide settings. For SQLite everything happens inside a single namespace. These issues will be explicitly apparent in the text below, but it will **not** call out which are due to the dialect difference and which to the environment. ## Details Enough preamble - into the details we go. These are not in any specific order, as this has merely been adapted from run-and-hit-error approach to making all the required changes. ### Table names While in some senses obvious - that in changing from one platform to another, that you might not have exactly the same named tables present - another aspect is that the whole structure of name referencing can be different. From `DatabaseName.TableName` To `TableName` However, that kind of change was going to cause problems for situations where I had actually made use of the different databases as name spaces. For example, it happens that where I work, I don't have system permissions to create views and macros in the same databases where I can make tables. And while that is a hassle and requires me to instead make any Views and Macros in my own "user" database (because in Teradata, a user account IS a user's own database) that is also handy. For example, I used my own space as a place to create a view `MyUser.NameOfPurpose` and then use it to make a table of the same name but different *database location* as in: ``` SQL CREATE TABLE DatabaseName.NameOfPurpose AS SELECT * FROM MyUser.NameOfPurpose ``` Which is nice and neat, but then poses a problem if we're translating everything to a single database/namespace. I'm not saying the following is a great strategy, but it was my quick *during the process* choice to make use of prefixes in lieu of namespaces and it got me through this time. So I would change from - `MyUser.Something` To - `VEW_Something` and from `WorkingDatabase.MyUser_Something` (because a polite convention where I work is to put our usernames as a prefix*) To `TBL_Something` If I was going to do more of this, then I might come up with something more sophisticated. 
- (* by the way, the politeness is to not make other analysts look up the data dictionary to get the `CREATORNAME` and also has a nice effect that our general object names can't interfere with each other) ### ANZSIC Resource changes This section is unique to the specific purpose I had for this conversion, which was that is was using the [Australian and New Zealand Standard Industrial Classification (ANZSIC)](https://www.abs.gov.au/statistics/classifications/australian-and-new-zealand-standard-industrial-classification-anzsic/2006-revision-2-0) codes. Follow that link if you want to know more about these. My understanding is that while an independent standard used across Australia and New Zealand, there are similar things in other parts of the world. - And indeed I've been using these kinds of codes in my data work for over 20 years. As the whole purpose of the SQL script I was converting is to do *something interesting* with data classified using these codes, necessarily I also had to ensure the handling of these codes was correctly adapted from one environment to the other. I will leave this section here, it may still be of interest as the kind of changes that occur in this type of exercise. Things I had to deal with included: - Reduce from 5 digit coding to just 4 - Specific table and column names - Change some specific ANZSIC codes #### Reduce from 5 digit coding to just 4 As it happens, in my workplace, we enhance some ANZSIC codes by adding an extra digit. In picking up both the ANZSIC definitions and some example data from public "open data" resources, these use only the four digit codes, some various changes had to be made to suit that. #### Specific table and column names Similarly, at my workplace, the reference tables for the ANZSIC codes were made for me by other people (and had the added 5th digit) so in downloading them independently from the ABS (Australian Bureau of Statistics) I don't have the same table structures on tap. As it happened, this could have given me the chance to do something more appropriate in terms of the structures I would make but didn't want to be rewriting a lot of things while I was mainly just trying to adapt some 2,000 lines of SQL. So I chose a compromise, loading the lookup tables in a way that was convenient to construct from the downloads but then using a custom view to emulate the structure assumed by the script I was converting. #### Change some specific ANZSIC codes As the script that I was converting has a special feature of being able to isolate a single (or small set of) ANZSIC codes for treatment, I didn't have any wish to expose which specific ones my workplace had a special interest in. So I chose another one that seemed to have a similar distribution property (in totally different data, mind). So that meant changing each place where the script had (the *secret* digit sequence): - 'NNNN' and replacing it with - '8512' Note that those are both strings/chars because while many of the ANZSIC codes are digits, not all are. ### Making Views Syntax Change On Teradata you can make a View using either the keyword `CREATE` or the keyword `REPLACE` - with the latter working even when a view of that name already exists. As a consequence there is no reason not to always use `REPLACE`. But SQLite doesn't have this feature. - FWIW by contrast, Teradata SQL *still* doesn't have a form of `DROP` that is safe to use regardless of whether an item does or does not already exist. Yes, there are some work-arounds but that's a whole other topic. 
Therefore, a first step is to search/replace throughout the script to find every
- `REPLACE VIEW`

and change it to
- `CREATE VIEW`

Actually, that's not quite enough, because in part, one writes scripts so they can be run and re-run. For SQLite this is a simple matter of putting a bulletproof `DROP` before each `CREATE`:

``` SQL
DROP VIEW IF EXISTS NameOfView ;

CREATE VIEW NameOfView AS
```

### Making Macros Significant Change

While noting that, like with views, for its macro syntax Teradata allows a `REPLACE MACRO` as well as a `CREATE MACRO` - but that's not really the problem.

Instead, the situation is that SQLite doesn't appear to have any "macro" feature. Luckily, that's not quite true. Because what SQLite does have, is triggers. And while a trigger has to be tied to some other database action, it provides a way of defining a statement that will execute.

- Of course, a Teradata macro has quite a few other aspects, such as parameter passing, but in this case I didn't need any of that.

So for converting a Teradata macro into SQLite, what we do is create a trigger on a view that does nothing. To execute the trigger, we issue a DELETE FROM against the view name. In SQLite that DELETE doesn't actually remove anything from the view - it fires the INSTEAD OF trigger, whose body then runs. Ergo, we have a kind of macro.

So, if we had a macro named MacroName, then in Teradata SQL we would do:

``` SQL
REPLACE MACRO MacroName AS
(
-- sql statements ;
) ;
```

And to get a similar effect in SQLite, we first create a dummy view:

``` SQL
CREATE VIEW View4Trigger_MacroName AS
SELECT CURRENT_TIME ;
```

then we define a trigger that will do our bidding

``` SQL
CREATE TRIGGER Trigger4_MacroName
INSTEAD OF DELETE ON View4Trigger_MacroName
BEGIN
-- sql
END ;
```

and then we trigger the trigger, with a statement that does:

``` SQL
DELETE FROM View4Trigger_MacroName ;
```

#### Whither Stored Procedures

p.s. Who said "Stored Procedure"? Actually, you won't find "macro" mentioned anywhere else - and that's because macros are a Teradata oddity. There's a lot I could say about this, but it's just Teradata history and not particularly relevant here. Teradata did eventually add Stored Procedures to its feature set - quite a long time ago now - but macros are still there and have some permission conveniences.

### Remove Temporary Scaffolding Code

This point merely says something about how I write Teradata SQL, which is that:

- I can be lazy about writing `CREATE` statements and so will use a `CREATE .. AS` construct to make a table and then use `SHOW TABLE` to get a CREATE statement ready to adapt.
- but I also don't leave a script with a CREATE AS in it, and instead replace it with a `CREATE` for making an empty table and then do an `INSERT` to populate it.

As a consequence, I will end up with a script which includes a whole bunch of `CREATE AS WITH NO DATA` and `SHOW TABLE` then `DROP TABLE` statement combinations lying around. In the SQLite environment - and with the script feature complete - these were just extraneous and could be deleted.

### Remove Character Set Declarations

One of the consequences of my `CREATE AS` and `SHOW TABLE` method is that the table creations I thereby get will have all the specific defaults applied and made explicit. In the Teradata environment that's quite a plus. But they immediately caused me problems in SQLite and had to go.

So any column definitions that had the following text
- `CHARACTER SET LATIN NOT CASESPECIFIC`

had to have it removed.
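To make that concrete, here is a small before/after sketch of the kind of column definition change involved - the column name here is made up, not taken from the actual script:

``` SQL
-- As emitted by Teradata SHOW TABLE (hypothetical column)
Prmy_Anzsic_Cd CHAR(4) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL ,

-- The same column definition trimmed for SQLite
Prmy_Anzsic_Cd CHAR(4) NOT NULL ,
```

SQLite is relaxed about declared column types, so nothing needs to be added in place of the removed clause.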
### Remove Set Table Declarations Along the same line as the above, the Teradata (good) default is to have "set" tables (as opposed to "multiset") but this is not a feature of SQLite and so had to go. Thus - `CREATE SET TABLE` becomes simply - `CREATE TABLE` ### Remove More Table Declarations And to cut a long story short, for my script the following stock clause also had to be removed from `CREATE TABLE` statements: ``` SQL FALLBACK , NO BEFORE JOURNAL, NO AFTER JOURNAL, CHECKSUM = DEFAULT, DEFAULT MERGEBLOCKRATIO, MAP = TD_MAP1 ``` ### UPI recoding Where `UPI` = `UNIQUE PRIMARY INDEX` In Teradata, the equivalent to a "primary key" is a "unique primary index". But as well as the keyword change, the SQLite syntax is different. In Teradata the UPI setting comes as a clause *after* the list of column definitions has been closed - with a `)` character. In SQLite, the primary key setting is done *inside* the column list. - *Actually in SQLite it has two forms of that but we'll only use one here.* From ``` SQL LastColumnName DataType ) UNIQUE PRIMARY INDEX ( PrimaryIndexColumn ) ; ``` To ``` SQL LastColumnName DataType , PRIMARY KEY ( PrimaryIndexColumn ) ) ; ``` ### SAMPLE and TOP This is yet another one of those things that simply varies among SQL dialects - and frankly I have no which one, if either, is **in the SQL standard**. Teradata has a `SAMPLE` clause, but as I was only using it for some minor data testing of view constructions, could be easily be replaced. Ditto for the Teradata syntax of using `SELECT TOP x` at the beginning of a `SELECT` statement. Both of those could be replaced by using the SQLite syntax of add `LIMIT x` at the end of a `SELECT` statement. While not important anywhere, this was annoying to enact as it couldn't be done by simple search/replace actions in the editor. ### No System Calendar Pseudo Table This is another small thing that I've become used to doing on Teradata, because it supplies a built-in pseudo table that generates a full calendar. While obviously useful for calendar related things, the fact that it has a `day_of_calendar` column makes it easy to construct scratch tables as if out of thin air. In the script at hand I was using it as the base for some cross joins to generate a full set of digit combinations. As this was only a matter of ten rows - for the digits "0" to "9" - I chose to replace it with a real table and simply wrote ten insert to literally populate it with the desired digits. ``` SQL DROP TABLE IF EXISTS TBL_1_Digits ; CREATE TABLE TBL_1_Digits ( DigitChar CHAR(1) , PRIMARY KEY ( DigitChar ) ) ; INSERT INTO TBL_1_Digits ( DigitChar ) VALUES ( '0' ) ; INSERT INTO TBL_1_Digits ( DigitChar ) VALUES ( '1' ) ; INSERT INTO TBL_1_Digits ( DigitChar ) VALUES ( '2' ) ; INSERT INTO TBL_1_Digits ( DigitChar ) VALUES ( '3' ) ; INSERT INTO TBL_1_Digits ( DigitChar ) VALUES ( '4' ) ; INSERT INTO TBL_1_Digits ( DigitChar ) VALUES ( '5' ) ; INSERT INTO TBL_1_Digits ( DigitChar ) VALUES ( '6' ) ; INSERT INTO TBL_1_Digits ( DigitChar ) VALUES ( '7' ) ; INSERT INTO TBL_1_Digits ( DigitChar ) VALUES ( '8' ) ; INSERT INTO TBL_1_Digits ( DigitChar ) VALUES ( '9' ) ; ``` In retrospect that was probably a better solution all round - I'm used to using the Teradata `sys_calendar.CALENDAR` for all kinds of things in many kinds of scales, so my general reasons for doing so remain valid. ### Replace FULL OUTER JOIN As SQLite does not support `FULL OUTER JOIN` then something will have to be done instead. 
As it happens, this is a well covered topic, with some opinions being that FULL OUTER JOIN is something that **should be avoided** *even where it is supported*. The stock advice seems to be to replace it with a combination of UNION and GROUP BY structures. Here is what I had as Teradata SQL ``` SQL -- Attempt a coalesce of the two lookup methods SELECT COALESCE( AZ_ML.At_Lvl_Cd, DC_AU.DigitsN ) AS At_Lvl_Cd , COALESCE( AZ_ML.Lvl_At , CHARACTER_LENGTH( DC_AU.DigitsN) ) AS Lvl_At , COALESCE( AZ_ML.Up_Lvl_Cd, DC_AU.DigitsM ) AS Up_Lvl_Cd FROM DbName.MyUser_ANZSIC_DigitCharsAndUps AS DC_AU FULL OUTER JOIN DbName.MyUser_ANZSIC_Multi_Level_Lookup_Base AS AZ_ML ON AZ_ML.At_Lvl_Cd = DC_AU.DigitsN ; ``` And here is the SQLite replacement ``` SQL -- Attempt a coalesce of the two lookup methods SELECT U.At_Lvl_Cd , MAX( U.Lvl_At ) AS Lvl_At , MAX( U.Up_Lvl_Cd ) AS Up_Lvl_Cd FROM ( -- U SELECT AZ_ML.At_Lvl_Cd AS At_Lvl_Cd , AZ_ML.Lvl_At AS Lvl_At , AZ_ML.Up_Lvl_Cd AS Up_Lvl_Cd FROM TBL_ANZSIC_Multi_Level_Lookup_Base AS AZ_ML UNION SELECT DC_AU.DigitsN AS At_Lvl_Cd , LENGTH( DC_AU.DigitsN) AS Lvl_At , DC_AU.DigitsM AS Up_Lvl_Cd FROM TBL_ANZSIC_DigitCharsAndUps AS DC_AU ) AS U GROUP BY U.At_Lvl_Cd ; ``` To be frank, there is much more that can be said about `FULL OUTER JOIN` versus `UNION .. GROUP BY` but this was all I needed for the few places the issue was present in this script. There is plenty to read elsewhere on this topic. ### CHARACTER_LENGTH While simple, this one caught me by surprise because I thought this function was part of the SQL "standard" - not that that ever means very much in practice. Anyway, the solution was a simple change of function name. I replaced `CHARACTER_LENGTH` with `LENGTH` ### QUALIFY Teradata is one of the dialects that has `QUALIFY` and uses it to enable filtering in the same query layer as a "window" function is declared. In short, SQLite does not have this. In my case, this is not new, as I've previously had to re-code from Teradata to HiveQL, which is similar is having window functions but not a "QUALIFY" clause. What it requires is adding an extra layer of table abstraction - as a derived table or a CTE (common table expression) and then use a WHERE clause to do what the QUALIFY did. For example, here is how it might look in Teradata SQL ``` SQL SELECT T.* FROM TheTableName AS T QUALIFY ROW_NUMBER() OVER ( PARTITION BY GroupCol ORDER By Sortcol ) = 1 ; ``` And a first step to conversion is to make the window function a named element in the SELECT clause. Note that Teradata is happy to apply the QUALIFY to that named value. ``` SQL SELECT T.* , ROW_NUMBER() OVER ( PARTITION BY GroupCol ORDER By Sortcol ) AS Rw_Num FROM TheTableName AS T QUALIFY Rw_Num = 1 ; ``` Now we can abstract all of the above and remove the QUALIFY and instead do the same filtering as a WHERE clause. ``` SQL SELECT D_T.* FROM ( -- D_T SELECT T.* , ROW_NUMBER() OVER ( PARTITION BY GroupCol ORDER By Sortcol ) AS Rw_Num FROM TheTableName AS T ) AS D_T WHERE D_T.Rw_Num = 1 ; ``` With the QUALIFY gone, the syntax is now ready to work in SQLite. - Do note that I'm not saying this will be internally planned and executed the exact same way - or that it won't, that being a complex per-platform topic. ### Change from PRIMARY INDEX Now, you might be excused for thinking this was already covered in the text above. But no, this is another twist, because it is UNIQUE PRIMARY INDEX which is the PRIMARY KEY equivalent. 
Instead, in Teradata, a non-unique "PRIMARY INDEX" merely assists with data spreading at execution. While this technically means that the clause can just be dropped without having any effect, there's a good chance that the reason it was there will imply some thinking about what should be done instead. In the one case of this for the conversion I was attempting I had used "PRIMARY INDEX" simply because I was too lazy to work out what column combination would be a UPI (i.e. primary key). Having re-assessed that, I made a useful selection and set a new PRIMARY KEY clause. ### Change Syntax for Defining a Recursive View While this was confusing to sort out, from comparative reading of documentation and examples, the change is simple enough - albeit with a couple of twists. Indeed, there were three issues to be dealt with here. Do note: I'm not going to try to explain recursive queries in this context - if you need to gain comfort with those then you will need to seek elsewhere. The three issues are: - Syntax - Final Select - Naming #### Syntax On the face of it, it mainly looks like a change from the Teradata SQL ``` SQL CREATE RECURSIVE VIEW VEW_ANZSIC_Multi_Level_Recursive_Lookup ( At_Lvl_Cd , Lvl_At , Up_Lvl_Cd , Up_Lvl_At , DegreeOfSep ) AS ( ``` To the SQLite form of ``` SQL CREATE VIEW VEW_ANZSIC_Multi_Level_Recursive_Lookup AS WITH RECURSIVE Recursive_Lookup ( At_Lvl_Cd , Lvl_At , Up_Lvl_Cd , Up_Lvl_At , DegreeOfSep ) AS ``` But as we will see, there is more to it than this. #### Final Select ``` CREATE RECURSIVE VIEW NameOfView ( ListOfColumns) AS ( -- Seed Select Statement UNION ALL -- Recursive Select Statement that uses NameOfView ); ``` Note how this compares to a non-view use of recursion in Teradata SQL, where a ***recursive with*** clause is followed by a final `SELECT` that uses it. ``` WITH RECURSIVE NameOfWith ( ListOfColumns) AS ( -- Seed Select Statement UNION ALL -- Recursive Select Statement that uses NameOfWith ) SELECT Something FROM NameOfWith ; ``` Now look back to the `RECURSIVE VIEW` syntactic structure and see that it does not have the final `SELECT`. In effect, that happens when you use the view in a later select. Clearly the Teradata idea of a recursive view is only a way of saving the `WITH RECURSIVE` clause as a named item, available outside its own definition. By comparison, the SQLite idea seems to be that you're merely defining a view - hence `CREATE VIEW` rather than `CREATE RECURSIVE VIEW` but then allows you to put a recursive WITH inside the definition. Does this matter? Well it might. As it happened, the Teradata recursive view that I had written realy required being used with a WHERE clause each time - filtering `WHERE Up_Lvl_Cd IS NOT NULL` In adding that final SELECT inside the view to make it work in SQLite, that WHERE clause was "burnt into" the view. But by the nature of that specific clause, having it also used in later uses of the view would be quite harmless. I suspect that converting in the other direction - from SQLite to Teradata - it would be prudent to add another view just to add the effect of that *final select*. #### Naming Another quirk of the object naming difference between the two environments, is that the Teradata syntax, has the strange thing by which the view may have had to be declared as being in a named database but then its non-database name must be used on the inside. To adapt the syntax example given above, we add `DatabaseName.` to the create clause, but note that it cannot be used inside the definition. 
``` CREATE RECURSIVE VIEW DatabaseName.NameOfView ( ListOfColumns) AS ( -- Seed Select Statement UNION ALL -- Recursive Select Statement that uses NameOfView ); ``` That has led to my name changing scheme from earlier steps to have confused things - as only one of the two names was replaced - which led to some *fun* until I twigged to what was going on. As already noted, that problem is not present in the SQLite syntax for which the named recursive element is an alias fully inside the view definition, rather than part of its *outside*. ## Summary To be honest, the process of conversion proved to be both as difficult as I expected (but with more details) all while clearly always going to be possible (thankfully because my knowledge proved to be sufficient to be confident that all the issues were covered). So it was both annoying and yet satisfying to achieve. If you've read this far I hope you found the annotations either helpful or interesting as an exercise. I wrote it because I have often been grateful for when other people openly documented their experiences.
geraldew
1,875,551
Build a reverse ssh tunnel reverse proxy for (not) fun and (not) profit
A few years ago, a team I was part of was working on vivi, our master thesis's project. You can read...
0
2024-06-03T14:58:51
https://matteogassend.com/blog/build-a-reverse-ssh-tunnel-reverse-proxy-for-not-fun-and-not-profit/
networking, ssh, vpn, nginx
--- title: "Build a reverse ssh tunnel reverse proxy for (not) fun and (not) profit" tags: - networking - ssh - vpn - nginx canonical_url: https://matteogassend.com/blog/build-a-reverse-ssh-tunnel-reverse-proxy-for-not-fun-and-not-profit/ cover_image: https://cdn.blog.matteogassend.com/reverse-ssh-cover.webp published: true --- A few years ago, a team I was part of was working on [vivi](https://matteogassend.com/projects/vivi/), our master thesis's project. You can read more about it in the link above if you'd like, but the gist of it is; we were trying to build a solution to handle internet traffic and routing for ephemeral events (think something like comic-con etc) with lots of features and a user interface non-technical users could use and understand. ![Didnt Work The Blob GIF by Max](https://media1.giphy.com/media/JXOl4rFwoJJks7xLAr/giphy.gif?cid=bcfb6944oirknboxyvfaahrhp1g2itqk326lebymiiz76mhf&ep=v1_gifs_search&rid=giphy.gif&ct=g) ## Where does a reverse ssh tunnel reverse proxy fit into this ### Context When the time came to built the box that would live between the router and the rest of the network, we went (al least for the POC - proof of concept) with a RaspberryPi 4. We tried different solutions to handle communications between our backend server and all the machines, including using [balena os](https://www.balena.io/) - which would have worked fine had we not needed to work with the network stack of the device. ### Build it yourself So that's why we ended up building this kind of Frankenstein monster of a service. We needed a way to remotely connect to the device to check its status and perform maintenance operations (i.e. reboots, firmware updates etc). And before you ask: > couldn't you have just used the ip address of the machine or exposed a port on the network to connect to it? ### NAT and local ip addresses Fasterthanlime has an excellent video describing how the internet works, you should check it out {% embed https://www.youtube.com/watch?v=jjKFXlFNR4E&t=4s&pp=ygUOZmFzdGVydGhhbmxpbWU%3D %} ### A (forward) ssh tunnel A normal (or forward) SSH tunnel allows you to securely access a remote service over an encrypted SSH connection. For example, let's say you want to access a web service running on a remote server's port 8000, but that port is blocked by a firewall. You can create an SSH tunnel by connecting to the remote server (assuming port 22 is open) and forwarding a local port, say 9000, to the remote port 8000: you can run the command: ```bash ssh -L 9000:localhost:8000 user@remote_server ``` with this command, you can then open the address `localhost:9000` and the request will then be forwarded to the remote machine's ssh port and then finally reach the destination port (8000 in this case). This is also called **Tcp Forwarding**. ![a schema of the logic behind ssh tunneling](https://cdn.blog.matteogassend.com/reverse-ssh_ssh-tunnel-schema.webp) ### Let's reverse that Now, what happens if the machine I need to connect to is behind a firewall and/or a NAT? It would most likely not work because: - I don't know the machine's actual IP address - I cannot (usually) initiate a connection from outside the local network without an exposed port or something similar And so, enters the reverse ssh tunnel. A reverse ssh tunnel is based on the same principle as the normal ssh tunnel, except it contains an additional step, with the remote machine using the initial connection from the other host to create a new "tunnel" toward the initiating machine. 
This allows us to bypass the issues with firewalls and NATs listed above.

For example, to tunnel the SSH port (22) of a remote machine to port 8022 on your local machine, the command would be:

```bash
ssh -R 8022:localhost:22 user@local_host -N
```

## How we built it (attention: do not try this at home, there are easier ways to do this)

There are a couple of parts to this setup:

- The HTTP server onboard each device
- The systemd service responsible for the ssh tunnel on each device
- The server handling all of these connections and all the reverse proxy logic

### Http Server

The HTTP server is the API we built to manage the device; it allowed us to remotely restart the device, update the dependencies, retrieve its logs etc... It was a simple NestJS application.

### Systemd Service

The systemd service was responsible for receiving a port from the server and configuring the reverse ssh tunnel to connect to said port (a rough sketch of what such a unit file might look like is included at the end of this post).

### Remote Server

The remote server was the one receiving all the connections and handling the domain name registration for each device and its corresponding reverse proxy configuration. The code is available [here](https://github.com/vivitek/OpenVivi) if you want to take a look at it (though I would not recommend it).

The steps were:

- A device would upload its ssh public key to the remote server
- The remote server would then generate an nginx config and associate a domain name with a port number
- The port number would be sent to the device to configure its service
- The connection was then started on boot

## Why you shouldn't do it this way

The main issue with the way we did this is security; there really is no central authority managing this.

- The device uploads its ssh key and registers itself with the server
- The server never verifies that the request is coming from a legitimate source.
- The server also doesn't check to see if the ssh connection for a specific host is aiming for the right port

## Alternatives

A simpler way to handle this, given the infrastructure requirements, would probably have been a mesh VPN like Tailscale or ZeroTier - or even plain old WireGuard. This would still allow all the reverse proxy goodies, but would provide a central authority to handle authentication (the server provides and checks for correct credentials on a new connection). Really, a VPN would probably have simplified a lot of things for this use case.
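As mentioned above, here is a rough sketch of what the per-device tunnel unit file might look like. This is a hedged illustration rather than the actual vivi service: the user, key path, remote host, and port are all hypothetical placeholders (in the real setup the port was assigned during the registration step).

```ini
# Hypothetical /etc/systemd/system/reverse-tunnel.service on the device
[Unit]
Description=Reverse SSH tunnel back to the management server
After=network-online.target
Wants=network-online.target

[Service]
# -N: no remote command, -R: expose the device's port 22 on the server's port 8022
ExecStart=/usr/bin/ssh -i /home/pi/.ssh/id_ed25519 \
    -o ServerAliveInterval=30 -o ExitOnForwardFailure=yes \
    -N -R 8022:localhost:22 tunnel@example.com
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```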
matfire
1,875,549
Spec Coder: Your Powerful AI Co-pilot for Supercharged Coding in Visual Studio Code
Spec Coder is an AI-powered extension that revolutionizes your coding experience within Visual Studio...
0
2024-06-03T14:52:56
https://dev.to/hkp22/spec-coder-your-powerful-ai-co-pilot-for-supercharged-coding-in-visual-studio-code-5dmm
webdev, ai, vscode, programming
Spec Coder is an AI-powered extension that revolutionizes your coding experience within Visual Studio Code. It's like having a super-intelligent coding buddy by your side, always ready to assist you with a vast arsenal of features. {% youtube 88CTOA7jL4s %} 👉 Don't get left behind! Try [Spec Coder: Supercharge Your Coding with AI!](https://qirolab.com/spec-coder) 🔥 . **Choose Your AI Powerhouse** Spec Coder stands out by offering flexibility in its AI engine. You can choose between industry leaders: * **OpenAI:** Leverage advanced models like ChatGPT and GPT-4 for versatile AI assistance, from code suggestions to real-time analysis. * **Gemini Pro:** Powered by Google's AI Studio, Gemini Pro offers cutting-edge AI for an unparalleled coding experience. Additionally, Spec Coder integrates with: * **Hugging Face:** Access a vast library of pre-trained models for specific coding tasks like code generation and AI chat. * **Ollama:** Utilize a platform dedicated to large language models for developers. Ollama offers functionalities like code completion and debugging assistance, even offline! **Spec Coder's Feature Arsenal** Spec Coder goes beyond basic code completion. Here's what it offers: * **AI Chat:** Get real-time coding help and guidance through an interactive interface. * **AI-powered Autocomplete:** Reduce errors and write faster with intelligent code suggestions. * **Manual Code Generation:** Need a specific function or snippet? Describe it and let AI generate the code. * **Effortless Unit Test & Docstring Generation:** Ensure code correctness and readability with automatic unit test and docstring creation. * **Deep Code Understanding:** Gain insights into your code's functionality with AI explanations. * **AI-driven Refactoring:** Improve code quality and maintainability with suggested refactoring. * **Code Complexity Analysis:** Identify areas for improvement by understanding code complexity. * **AI-powered Bug Detection:** Find and resolve coding issues before they cause problems. * **Automatic Commit Message Generation:** Simplify version control with concise and accurate commit messages. * **Ask AI Anything:** Get instant insights and explanations about your code. **Feedback and Support** Spec Coder values user feedback. Share your suggestions, bug reports, or questions on their GitHub repository: [https://github.com/qirolab/spec-coder-issues](https://github.com/qirolab/spec-coder-issues) **The Future of Coding is Here** Spec Coder empowers developers to push beyond traditional coding limitations. Its cutting-edge AI features unlock new levels of productivity and creativity. **Ready to supercharge your coding? [Download Spec Coder](https://qirolab.com/spec-coder) today and experience the future of software development!** **Happy coding! ** [![Spec Coder](https://i.imgur.com/lqkt7a3.png)](https://qirolab.com/spec-coder)
hkp22
1,875,544
Before Cloud Computing
Before cloud computing, companies had many difficulties regarding infrastructure and management, for...
0
2024-06-03T14:45:35
https://dev.to/leonardosantosbr/before-cloud-computing-12j3
beginners, learning, cloudcomputing
Before cloud computing, companies faced many difficulties with infrastructure and its management, for example: - **High infrastructure costs** Companies needed to invest a lot of capital in servers, data centers, hardware and software, as well as in maintaining and occasionally replacing this equipment. _Data centers_: buildings or physical facilities that house IT infrastructure. - **Lack of scalability** Meeting growing demand was difficult and very expensive; adding servers or increasing storage capacity required weeks or months and significant investment. - **Maintenance complexity** Data center maintenance required specialized teams to manage hardware and software; these employees had to be physically present on site or rely on unreliable remote access solutions, and this took a lot of time. - **Limited backup and recovery capacity** Local backups are subject to physical failures and local disasters, such as fires and heavy rains, which could cause irreparable data loss.
leonardosantosbr
1,875,541
Create a Blazing Fast React App with Vite
Vite is a modern build tool that's taking the React development world by storm. Known for its...
27,428
2024-06-03T14:43:02
https://dev.to/ellis22/create-a-blazing-fast-react-app-with-vite-e01
vite, webdev, javascript, react
Vite is a modern build tool that's taking the React development world by storm. Known for its lightning-fast development server and streamlined workflow, Vite offers a significant improvement over traditional tools like Create React App. In this article, we'll guide you through creating a React.js application using Vite, highlighting its key benefits and getting you started in no time. {% youtube M1GS1SaxAiY %} 👉 **[Download eBook - JavaScript: from ES2015 to ES2023](https://qirolab.gumroad.com/l/javascript-from-es2015-to-es2023)** . ### Why Choose Vite for React Development? Here are some compelling reasons to choose Vite for your next React project: * **Super-fast Hot Module Replacement (HMR):** Vite boasts blazing-fast HMR, allowing you to see code changes reflected in the browser almost instantly. This significantly improves your development experience, reducing waiting times and keeping you in a productive flow. * **Simplified Development Server:** Gone are the days of complex build configurations. Vite uses native ES modules, eliminating the need for a bundler during development. This translates to a simpler setup and faster server restarts. * **Seamless Production Builds:** When it's time to deploy your React app, Vite leverages Rollup to create highly optimized production builds. This ensures your app runs smoothly even on slower devices. ### Getting Started with Vite and React Let's create a brand new React app using Vite. Here's what you'll need: * Node.js (version 14.18+ or 16+) installed on your system. Once you have Node.js set up, open your terminal and navigate to the directory where you want to create your React project. Now, run the following command: ```bash npm create vite@latest ``` This command will prompt you for a few details, such as your project name and preferred framework (choose React in this case). Vite will then handle the setup and install all the necessary dependencies. After the setup is complete, navigate to your project directory using the `cd` command and run: ```bash npm install ``` This command installs the project dependencies. Finally, start the development server using: ```bash npm run dev ``` Vite will launch the development server, typically at `http://localhost:5173` (the port number might vary). Open this URL in your browser, and you should see the default React app running. ### Exploring Your React Project with Vite Vite creates a well-structured project directory. The key file you'll be working with is `src/App.jsx`. This file contains the main React component for your application. You can modify this file to create your React components and build your application's user interface. Vite offers a hot module replacement feature, so any changes you make to your React components will be reflected in the browser almost instantly, eliminating the need to manually refresh the page. This allows for a much faster development cycle. ### Next Steps This article provides a basic overview of [creating a React app with Vite](https://www.qirolab.com/posts/how-to-create-a-reactjs-app-using-vite-a-step-by-step-guide-1717417504). With the development server up and running, you're now ready to dive into building your React application. There are plenty of resources available online to help you learn more about React and Vite. Vite offers a refreshing and streamlined development experience for React developers. 
With its blazing-fast performance and simplified workflow, Vite is a compelling alternative to traditional build tools and is well worth considering for your next React project. 👉 **[Download eBook](https://qirolab.gumroad.com/l/javascript-from-es2015-to-es2023)** [![javascript-from-es2015-to-es2023](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/87ps51j5doddmsulmay4.png)](https://qirolab.gumroad.com/l/javascript-from-es2015-to-es2023)
ellis22
1,875,540
Despliegue de aplicación de Django con Github Actions para un servidor propio
Para proyectos en donde la actualización del código es algo recurrente, es ideal realizar un proceso...
0
2024-06-03T14:42:30
https://dev.to/josemiguelsandoval/despliegue-de-aplicacion-de-django-con-github-actions-para-un-servidor-propio-5974
githubactions, django, deploy, cicd
For projects where the code is updated frequently, it is ideal to automate the project's deployment process, as well as to make sure it was uploaded correctly. This is known as CI/CD (Continuous Integration/Continuous Deployment). There are several ways to do this for a Django application, for example using Docker containers, automation over SSH, tools like Jenkins, etc. However, this post will use GitHub Actions for the process. To deploy this project we will also use: Nginx as the web server/proxy, Supervisor as the process controller, Gunicorn to serve the app, and PostgreSQL as the database manager. This post is a basic example of deploying a Django application using GitHub Actions together with prior testing, to verify that the code is correctly configured before performing the deploy. ## Step 1. Create a user on the server To use GitHub Actions on Ubuntu, you must use a user other than root; if you already have another user you can skip this step. We will create the user `ubuntu`; to do this, log in to the server you hired and, as the root user, run the following command: ```bash sudo adduser ubuntu ``` ### Step 1.1: Add the user to the sudo group ```bash sudo usermod -aG sudo ubuntu ``` ### Step 1.2 (optional): Configure sudo so it does not require a password Only if you need the new user not to be asked for a password when running sudo, you can edit the sudo configuration file: ```bash sudo visudo ``` At the end of the file, add the following line: ``` ubuntu ALL=(ALL) NOPASSWD:ALL ``` ### Step 1.3: Log in with the new user You can now log in to the server via SSH with the newly created user. ## Step 2. Install the required packages on the server First, update the server and all of its packages by running the following commands: ```bash sudo apt-get update sudo apt-get upgrade ``` Once the server is up to date, install the packages needed for the deployment: ```bash sudo apt install python3-pip python3-venv nginx git supervisor nano postgresql postgresql-contrib ``` ## Step 3: Create the database To manage the project's database, we will use PostgreSQL.
### Step 3.1: Open the Postgres terminal To open the Postgres terminal without having to switch users, you can run the following command: ```bash sudo -u postgres psql ``` This command will open the Postgres terminal and we will see something like this: ```postgres postgres=# ``` ### Step 3.2: Create the database and the user configured in Django Once inside the Postgres terminal, first create the user; replace "username" and "password" with the configuration you have in your project: ```sql CREATE USER username WITH PASSWORD 'password'; ``` Now create the database: ```sql CREATE DATABASE db_name; ``` To grant a user privileges on the database: ```sql GRANT ALL PRIVILEGES ON DATABASE db_name TO username; ``` To set a user as the owner of the database: ```sql ALTER DATABASE db_name OWNER TO username; ``` To exit the Postgres terminal, run the following: ```sql \q ``` ## Step 4: Configure the GitHub Actions runner for self-hosting ### Step 4.1: Create the runner on GitHub: Within the `"Settings"` section of our project's repository ![GitHub settings section](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vbtbpfu8lmsl0m5rrzfd.png) go to Actions and then to Runners. ![Actions runners](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wmktqm6d9yvt2mjqh2xt.png) Once in Runners, click the `New self-hosted runner` button. A new page will open with the instructions needed to configure the Linux server. ![Linux server selection](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xhfwearo3qmjxqu0sotu.png) ### Step 4.2: Configure the server Copy the commands from the `Download` section one by one and run them on the server, following the instructions that appear on screen. If you want to keep the default names, just press Enter whenever a command asks for some text. Then copy the first command from the `Configure` section to create the runner. If everything has gone well up to this point, you can run the following commands to tell the runner to keep the service running: ```bash sudo ./svc.sh install sudo ./svc.sh start sudo ./svc.sh status ``` Finally, to verify that everything is correct you can run: ```bash ./run.sh ``` The console should print that the connection with GitHub succeeded; to exit the run you can press `ctrl + c`. ## Step 5: Configure Supervisor First we will create the configuration file for our gunicorn service; to do this, move to the conf.d directory inside the supervisor folder: ```bash cd /etc/supervisor/conf.d ``` Once inside, create the gunicorn.conf file that will contain the instructions supervisor must run: ```bash sudo nano gunicorn.conf ``` Below is an example of what should be written inside the file.
The important thing is to verify that the paths are written correctly; to do this you must change `{name_repository}` to the name of your repository and `myproject` to the name of your Django project: ``` [program:gunicorn] directory=/home/ubuntu/actions-runner/_work/name_repository/name_repository command=/home/ubuntu/actions-runner/_work/name_repository/name_repository/venv/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/app.sock myproject.wsgi:application autostart=true autorestart=true stderr_logfile=/var/log/gunicorn/gunicorn.err.log stdout_logfile=/var/log/gunicorn/gunicorn.out.log [group:gunicorn] programs:gunicorn ``` To save the changes press `ctrl + o`, and to return to the terminal `ctrl + x`. Now create the directory where the gunicorn logs will be written: ```bash sudo mkdir /var/log/gunicorn ``` So that supervisor reads the newly created configuration file, tell it to with the following command: ```bash sudo supervisorctl reread ``` You should get the following response: ```bash gunicorn: available ``` Finally, so that supervisor applies the changes it just read, run: ```bash sudo supervisorctl update ``` When checking whether the gunicorn service is running correctly with the following command: ```bash sudo supervisorctl status ``` a `FATAL` error should appear with the description `can't find command '/home/ubuntu/actions-runner/_work/name_repository/name_repository/venv/bin/gunicorn'`, since we have not yet uploaded the code to our server; we will do that next. ## Step 6: Configure the GitHub Actions workflow For this case, we will show an example of how to build the workflow, splitting it into 2 steps: 1. Run tests to verify that the code can be deployed correctly. 2. If the tests pass, deploy the code to the server. ### Step 6.1: Create the GitHub Actions YAML file In the root of the GitHub repository, create the `.github` folder and, inside it, the `workflows` folder. Inside this last folder, create a file with the `.yml` extension, for example `ci-cd.yml`. The repository should therefore have a basic structure like the one shown below: ``` . ├── .github │ └── workflows │ └── ci-cd.yml ├── myproject │ ├── __init__.py │ ├── settings.py │ ├── urls.py │ └── wsgi.py ├── manage.py ├── requirements.txt │ ...
``` Inside the `ci-cd.yml` file we write the instructions GitHub Actions needs to automate the tasks: ```yml name: CI CD Django on: push: branches: - main jobs: test: runs-on: ubuntu-latest # You can use a default GitHub Actions runner for the tests services: postgres: image: postgres:13 env: POSTGRES_DB: db_name POSTGRES_USER: username POSTGRES_PASSWORD: password ports: - 5432:5432 strategy: max-parallel: 4 matrix: python-version: [3.10.12] steps: - name: Checkout uses: actions/checkout@v2 - name: Set up Python ${{ matrix.python-version }} uses: actions/setup-python@v2 with: python-version: ${{ matrix.python-version }} - name: Install dependencies run: | python -m venv venv source venv/bin/activate pip install -r requirements.txt - name: Run migrations run: | source venv/bin/activate python manage.py migrate - name: Run tests run: | source venv/bin/activate python manage.py test deploy: needs: test # This job only runs if the 'test' job passes runs-on: self-hosted strategy: max-parallel: 4 matrix: python-version: [3.10.12] steps: - name: Checkout main branch uses: actions/checkout@v3 with: clean: false - name: Set up Python ${{ matrix.python-version }} uses: actions/setup-python@v2 with: python-version: ${{ matrix.python-version }} - name: Install dependencies # gunicorn is installed here, but it really belongs in requirements.txt run: | python -m venv venv source venv/bin/activate pip install -r requirements.txt pip install gunicorn - name: Run migrations run: | source venv/bin/activate python manage.py migrate - name: Reset supervisor run: sudo service supervisor restart ``` ### Step 6.2: Push the YML file to GitHub to start GitHub Actions Commit and push the changes to upload our configuration. When you push, GitHub will immediately recognize the workflow and start running the tasks that were defined. You can see this in the `Actions` section of your GitHub repository. ![Actions section](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2vhh24khyjhqs9kne1kr.png) If the operation succeeds, a green check will be shown next to the task name; otherwise, a red x will appear. ### Step 6.3: Verify that the project was deployed to the server If the operation was successful, we can see that a folder called `_work` was created inside the `actions-runner` folder; this folder contains our repository, in this case `name_repository`, so the Django application directory should be at the following path: ``` cd /home/ubuntu/actions-runner/_work/name_repository/name_repository ``` Inside this folder we can find the `venv` folder containing the project's virtual environment, so we can activate it and run `manage.py` commands, such as creating a superuser. Now we can check whether Supervisor recognized the project and is running it; to do this, run: ```bash sudo supervisorctl status ``` If it shows the service as `RUNNING`, the file and the code are configured correctly.
If there is an error, check the paths inside the gunicorn.conf file. ## Step 7: Configure Nginx ### Step 7.1: Edit nginx.conf To configure Nginx, first we must change the user it has configured by default; to do this, move into the nginx folder: ```bash cd /etc/nginx ``` Once there, edit the nginx.conf file as follows: ```bash sudo nano nginx.conf ``` At the top of the file, the following directive will appear: ``` user www-data; # rest of the configuration ``` Delete the part that says www-data and change it to root: ``` user root; # rest of the configuration ``` Save the changes with `ctrl + o` followed by `ctrl + x`. ### Step 7.2: Configure Nginx to serve our project To configure Nginx and tell it that it has to serve our project, go to the following path: ```bash cd /etc/nginx/sites-available ``` Once here, create a configuration file that will contain all the instructions nginx needs to serve our application; the file name can be anything, but it must have the `.conf` extension. Example: ```bash sudo nano django.conf ``` Below is a basic example of how to configure the file, though there are many more options to customize how Nginx behaves. Example: ```nginx server{ listen 80; server_name 198.23.227.182; # The domain goes here. Example: mydomain.com or an IP address (not recommended) location / { include proxy_params; proxy_pass http://unix:/home/ubuntu/app.sock; } location /static { alias /home/ubuntu/actions-runner/_work/name_repository/name_repository/static; } location /media { alias /home/ubuntu/actions-runner/_work/name_repository/name_repository/media; } } ``` You should verify that the paths to the static and media files are correct by changing `name_repository` to the name of your repository. The domain you specify must be configured in Django's settings.py file under ALLOWED_HOSTS. Example: ```python ALLOWED_HOSTS = ['*'] # The * allows any HOST (not recommended). Your domain should go here ``` To verify that the file's syntax is written correctly, run: ```bash sudo nginx -t ``` A message like the following should appear: ```bash nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful ``` Now run the following command to create a symbolic link referencing the configuration file we just created: ```bash sudo ln /etc/nginx/sites-available/django.conf /etc/nginx/sites-enabled ``` Restart the nginx service: ```bash sudo service nginx restart ``` And if we now open the configured domain in our browser, we should see our project deployed. ## Conclusions By following these steps, you will have your Django project deployed using the tools mentioned in this article. There are several ways to do a deployment using GitHub Actions and Django; the important thing is to run tests before deploying your project, so that there are no errors in production. I hope this was useful to you :) Any doubts, questions or suggestions, let me know in the comments.
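One optional refinement, not covered in the tutorial above: the workflow and settings hardcode the database name, user, and password. A small, hedged sketch of how the Django settings could read those values from environment variables instead, so the same settings.py works locally, in the GitHub Actions test job, and on the self-hosted runner. The variable names below are ones chosen for illustration, not something the original post defines.

```python
# settings.py (sketch): read database credentials from the environment
# instead of committing them to the repository. Variable names are illustrative.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "db_name"),
        "USER": os.environ.get("POSTGRES_USER", "username"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", "password"),
        "HOST": os.environ.get("POSTGRES_HOST", "localhost"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}
```

In the workflow, the same names could then be supplied through the job's `env` block or repository secrets rather than hardcoded in `ci-cd.yml`.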
josemiguelsandoval
1,875,539
Preventing account sharing with Clerk
Scenario Imagine the following scenario: When User B logs in using User A's credentials...
0
2024-06-03T14:40:28
https://dev.to/shunyuan/preventing-account-sharing-with-clerk-47pm
webdev, javascript, tutorial, clerk
# Scenario Imagine the following scenario: > When User B logs in using User A's credentials from a different device, the platform invalidates User A's active session, logs them out, and allows User B to access the account to prevent account sharing. # The Problem While Clerk provides a way to invalidate a user session, there aren't any docs or guides on achieving the said scenario. Thus, I've written this guide to illustrate the process. # The Guide ## Before we continue ### Disclaimer This walkthrough will not cover how Clerk was set up with TRPC & NextJS 14 as seen in the example repo. It'll only go through the core logic/implementation of preventing account sharing. ### Tech Stack This guide uses the following tech stack: 1. [Create T3 APP](https://github.com/t3-oss/create-t3-app) * NextJS 14 app directory * TRPC 11 2. Pusher Channels * We'll be using the free tier which allows for 200k messages per day & 100 concurrent connection. 3. Clerk While it's using bleeding edge tech, I believe the concept described in the guide can be used on a traditional client-server architecture as well. ## Folder Structure ```md src ├── app │ ├── (auth) │ │ ├── layout.tsx │ │ └── sign-in │ │ └── [[...sign-in]] │ │ └── page.tsx │ ├── (internal) │ │ └── app │ │ ├── dashboard │ │ │ └── page.tsx │ │ └── layout.tsx │ ├── api │ │ └── pusher │ │ └── auth │ │ └── route.ts │ └── layout.tsx ├── hooks │ └── use-setup-session-management.ts ├── lib │ └── pusher │ ├── client.ts │ └── server.ts ├── server │ └── api │ └── routers │ └── user.ts └── utils ├── app-provider.tsx └── route-paths.ts ``` ## Implementation The full code is right [here](https://github.com/tanshunyuan/blog-session-invalidation/) ### Configuring a web socket service on your application With a pusher account, follow it's JS SDK [walkthrough](https://pusher.com/docs/channels/getting_started/javascript/)to retrieve the environment variables from your account. Now we need to setup a client and server pusher instance as follows: **Client** ```ts // src/lib/pusher/client.ts import PusherClient from "pusher-js"; import { env } from "~/env.js"; const PUSHER_AUTH_ENDPOINT = "/api/pusher/auth"; export const pusherClient = new PusherClient(env.NEXT_PUBLIC_PUSHER_KEY, { cluster: env.NEXT_PUBLIC_PUSHER_CLUSTER, authEndpoint: PUSHER_AUTH_ENDPOINT, }); ``` Notes * `PUSHER_AUTH_ENDPOINT` is an authentication endpoint that will be called on the client side to authenticate the outgoing request to the server. 
**Server** ```ts // src/lib/pusher/server.ts import PusherServer from "pusher"; import { env } from "~/env.js"; let pusherInstance: PusherServer | null = null; export const getPusherInstance = () => { if (!pusherInstance) { pusherInstance = new PusherServer({ appId: env.PUSHER_APP_ID, key: env.NEXT_PUBLIC_PUSHER_KEY, secret: env.PUSHER_SECRET, cluster: env.NEXT_PUBLIC_PUSHER_CLUSTER, useTLS: true, }); } return pusherInstance; }; ``` **Authenticating Client Pusher Request on the server** ```ts // src/app/api/pusher/auth/route.ts import { getPusherInstance } from "~/lib/pusher/server"; const pusherServer = getPusherInstance(); export async function POST(req: Request) { // see https://pusher.com/docs/channels/server_api/authenticating-users const data = await req.text(); const [socket_id, channel_name] = data .split("&") .map((str) => str.split("=")[1]); // use JWTs here to authenticate users before continuing try { const auth = pusherServer.authorizeChannel(socket_id!, channel_name!); return new Response(JSON.stringify(auth)); } catch (error) { console.error("pusher/auth.handler.catch", { details: error }); } } ``` ## Establish a client web socket connection **Client: Creating a hook** ```ts // src/hooks/use-setup-session-management.ts import { useUser } from "@clerk/nextjs"; import { pusherClient } from "~/lib/pusher/client"; import { useState } from "react"; let didInit = false; export const useSubscribeToSessionChannel = () => { const { user } = useUser(); useEffect(() => { if (!didInit) { const channel = pusherClient .subscribe("private-session") .bind( `evt::revoke-${user?.id}`, (data: { type: string; data: string[] }) => { if (data.type === "session-revoked") { // handle session removal } }, ); didInit = true; return () => { channel.unbind(); didInit = false; }; } }, [user, handleSessionRemoval]); }; ``` Notes: * `didInit` * Ensures that the hook only run once when mounted. More [here](https://react.dev/learn/you-might-not-need-an-effect#initializing-the-application) * `pusherClient.subscribe('private-session')` * `private-session` is the channel name * `pusherClient.subscribe('private-session').bind('evt::revoke-${user?.id})` * `evt::revoke-${user?.id}` is the event name to be triggered within the channel. **Client: Mounting it in the protected directory provider** ```tsx // src/utils/app-provider.tsx 'use client' import { useSubscribeToSessionChannel } from "~/hooks/use-setup-session-management"; import { type BaseChildrenProps } from "~/types/common"; export const AppProvider = (props: BaseChildrenProps) => { const { children } = props useSubscribeToSessionChannel(); return <> {children} </> } ``` Notes: * By putting it in a provider it means that every time the user visits the main application. 
This connection will be setup ## Query for extra session on the server side **Server** ```tsx // src/server/api/routers/user.ts import { clerkClient } from "@clerk/nextjs/server"; import { createTRPCRouter, protectedProcedure } from "../trpc"; import { getPusherInstance } from '~/lib/pusher/server'; const pusherServer = getPusherInstance(); export const userRouter = createTRPCRouter({ getExcessSessions: protectedProcedure.query(async ({ ctx }) => { const { userId, sessionId: currentSessionId } = ctx.auth; const { data: activeSessions } = await clerkClient.sessions.getSessionList({ userId, status: "active", }); if (activeSessions.length <= 1) return null; const excessSessionsIds = activeSessions .filter((session) => session.id !== currentSessionId) .map((session) => session.id); const revokeSessionsPromises = excessSessionsIds.map((sessionId) => clerkClient.sessions.revokeSession(sessionId), ); try { await Promise.all(revokeSessionsPromises).then(async () => { await pusherServer .trigger("private-session", `evt::revoke-${userId}`, { type: "session-revoked", data: excessSessionsIds, }) }); } catch (error) { console.error(error); } finally { return {}; } }), }) ``` Notes * Ensure both `channelName` & `eventName` for pusher is the same as the one on the client side **Client** ```tsx // src/utils/app-provider.tsx 'use client' import { useSubscribeToSessionChannel } from "~/hooks/use-setup-session-management"; import { api } from "~/trpc/react"; import { type BaseChildrenProps } from "~/types/common"; export const AppProvider = (props: BaseChildrenProps) => { const { children } = props useSubscribeToSessionChannel(); const excessSessionQuery = api.user.getExcessSessions.useQuery() if (excessSessionQuery.isLoading) return <p>Loading...</p> return <> {children} </> } ``` ## Sign user out based on extra sessions We're going to handle the information from the callback in the web socket connection We'd need to define how to handle the session removal when the web socket receives an item: ```tsx // src/hooks/use-setup-session-management.ts const useHandleSignOut = () => { const { signOut } = useClerk(); const router = useRouter(); return async (currentSessionId: string) => { await signOut(() => { router.push(`${ROUTE_PATHS.SIGNIN}?forcedRedirect=true`); }, { sessionId: currentSessionId }) } }; const useHandleSessionRemoval = () => { const { session: currentSession } = useClerk(); const handleSignOut = useHandleSignOut(); return async (excessSessionIds: string[]) => { try { const hasExcess = excessSessionIds.length > 0; const isCurrentSessionExcess = hasExcess && currentSession && excessSessionIds.includes(currentSession.id); if (!isCurrentSessionExcess) return; await handleSignOut(currentSession.id); } catch (error) { console.error('Error removing session:', error); } }; }; ``` * Using `use` as a keyword to prevent typescript from complaining Refer to [here](https://github.com/tanshunyuan/blog-prevent-account-sharing/blob/main/src/hooks/use-setup-session-management.ts) for the full code. ## Touching up the UI After the user is signed out, they'll be redirected to the sign in page. At this point we'd want to notify the user that they've been logged out. Recall this is how we redirected the user ```tsx router.push(`${ROUTE_PATHS.SIGNIN}?forcedRedirect=true`); ``` Notice the query parameter of `forcedRedirect=true`, this is used to trigger a toast to indicate the user they've been logged out on the sign in page. 
Here's how: ```tsx // src/app/(auth)/sign-in/[[...sign-in]]/page.tsx "use client"; import { SignIn } from "@clerk/nextjs"; import { useSearchParams, useRouter } from "next/navigation"; import { useEffect } from "react"; import { ROUTE_PATHS } from "~/utils/route-paths"; import toast from "react-hot-toast"; export default function SignInPage() { const router = useRouter(); const searchParams = useSearchParams(); const forcedRedirect = searchParams.get("forcedRedirect"); useEffect(() => { if (forcedRedirect) { toast.error('Detected additional sessions, kicking you out'); router.replace(ROUTE_PATHS.SIGNIN, undefined); } }, [forcedRedirect]); return ( <SignIn /> ); } ``` Notes * We check whether there's a search param called `forcedRedirect` * If there is, we display a toast telling the user that they've been logged out * We also perform a `router.replace` to remove the query param from the URL and hide the search query. # Conclusion By the end of this guide, you should have a grasp of how to leverage Clerk's session management to prevent account sharing. If you have any questions or remarks, feel free to leave them in the comments! The full code for this guide is [here](https://github.com/tanshunyuan/blog-prevent-account-sharing/tree/main)
shunyuan
1,875,537
Enhancing your Python code with decorators
Introduction In this article, you'll learn about decorators in Python: how they work, why...
0
2024-06-03T14:40:08
https://dev.to/osahenru/enhancing-your-python-code-with-decorators-27j4
python, backend, webdev, learning
#### Introduction In this article, you'll learn about decorators in Python: how they work, why they are useful, and when to use them. We'll also explore some common decorators and their use cases. While I aim to explain concepts from a beginner's perspective, this article is not tailored for absolute beginners. A fundamental understanding of OOP in Python is required to fully benefit from this article. With that in mind, let's dive in. In Python and most programming languages, a decorator is a tool that allows you to modify the behavior of a function or method without changing it’s main behavior. They help add more functionality before a function is called or after it has been called. Decorators give you the ability to modify the behavior of functions without altering their main implementation. #### Creating Decorators A decorator in Python is a function that: - Accepts another function as an argument, - Defines a new function inside itself, - Returns that new function. This is the basic template for creating decorators in Python. For example, if we want to create a simple decorator function that multiplies the return value of any function by 10, we can do it like this: ```python def mul_by_ten_decorator(func): def wrapper_function(): return func() * 10 return wrapper_function # target function def demo(): return 2 result = mul_by_ten_decorator(demo) print(result()) ``` ```python john@doe:~/Desktop$ python3 main.py output: 20 ``` ##### Notes📝 > We have the mul_by_ten_decorator that takes a function as an argument. Inside, the wrapper_function calls func() (i.e., the passed function) and multiplies the result by 10. > Moving to the demo function, which we use to test our decorator, it simply returns 2. The mul_by_ten_decorator wraps the demo function, so when demo is passed as an argument, the decorator multiplies its return value by 10. Calling result() is equivalent to calling wrapper_function(), which returns 20. This value is then printed to the screen when we run the code from the terminal. We can further simplify our code by introducing the `@` symbol #### Introducing `@` symbol in decorators ```python def mul_by_ten_decorator(func): def wrapper_function(): return func() * 10 return wrapper_function # target function @mul_by_ten_decorator def demo(): return 2 result = demo() print(result) ``` ```python john@doe:~/Desktop$ python3 main.py output: 20 ``` ##### Notes📝 > When we introduce the decorator symbol @, we can wrap our target function with our decorator function, making our code more encapsulated. By simply calling the demo() function, it is similar to calling the wrapper_function(), which internally calls func() (the demo function). The demo function returns 2, which is then multiplied by 10. The wrapper_function returns 20, which is assigned to result and printed out. #### Passing Arguments to Wrapper Functions Sometimes, functions can take multiple arguments. If we modify our demo function to take two arguments, a and b, and return their product multiplied by 2, it would look like this: ```python def mul_by_ten_decorator(func): def wrapper_function(): return func() * 10 return wrapper_function @mul_by_ten_decorator def demo(a, b): return 2 * (a * b) result = demo(4, 5) print(result) ``` When we run our code, we encounter a type error. ```python john@doe:~/Desktop$ python3 demo.py TypeError: mul_by_ten.<locals>.wrapper() takes 0 positional arguments but 2 were given ``` The error message implies that the wrapper_function is also expecting a number of arguments. 
We can modify our wrapper_function as follows: ```python def wrapper_function(a, b): return func(a, b) * 10 ``` so the entire code block can further be modified like this ```python def mul_by_ten_decorator(func): def wrapper_function(a, b): return func(a, b) * 10 return wrapper_function # target function @mul_by_ten_decorator def demo(a, b): return 2 * (a * b) result = demo(4, 5) print(result) ``` ```python john@doe:~/Desktop$ python3 main.py output: 400 ``` ##### Notes📝 > Calling demo() is the same as calling the wrapper_function(), which in turn calls func(). Therefore, we must ensure these functions accept the same number of arguments. What if we want to add 20 numbers using this same decorator? Do we need to pass 20 arguments to both the target function and the wrapper function? We cannot always define the number of arguments for both functions explicitly. Instead, we can use Python's *args and **kwargs keywords. The *args keyword allows us to pass a variable number of positional arguments to the function, while **kwargs allows us to pass a variable number of keyword arguments. We can make our code block more Pythonic by introducing these keywords. ```python def mul_by_ten_decorator(func): def wrapper_function(*args, **kwargs): return func(*args, **kwargs) * 10 return wrapper_function # target function @mul_by_ten_decorator def demo(a, b): return 2 * (a * b) result = demo(4, 5) print(result) ``` ##### Notes📝 > Now, our target function takes in two arguments, while our wrapper function accepts an arbitrary number of arguments. With a good understanding of how to use decorator functions, we can proceed to explore some use cases and the importance of using decorator functions. #### Use Case of Decorators in Python Let's examine some real-world examples where using a decorator comes in handy. ##### Logging It is often helpful to have logs of which functions are executed, along with relevant information. A logger is useful when you're trying to debug your code. Here is a simple example of how to create a logger using Python's built-in logging package. The information about the script is saved to a file named test.log: ```python import logging def function_logger(func): logging.basicConfig(level=logging.INFO, filename='test.log') def wrapper(*args, **kwargs): result = func(*args, **kwargs) logging.info(f'{func.__name__} ran with positional arguments: {args} and keyword arguments: {kwargs}. Return value: {result}') return result return wrapper # target function @function_logger def addition(a, b): return (a + b) print(addition(2, 5)) ``` ```python john@doe:~/Desktop$ python3 demo.py output: 7 ``` When you first run this code, a **test.log** file is created, which looks like this. ```python INFO:root:addition ran with positional arguments: (2, 5) and keyword arguments: {}. Return value: 7 ``` To see different log messages, we can change the values of the arguments passed, for example, **(11, 12)***. ```python INFO:root:addition ran with positional arguments: (2, 5) and keyword arguments: {}. Return value: 7 INFO:root:addition ran with positional arguments: (11, 12) and keyword arguments: {}. Return value: 23 ``` ##### Notes📝 > You can see our repeated template for creating a decorator: - The decorator takes a function as an argument. ✅ - The wrapper function calls and returns a function. ✅ > We import Python’s logging module with import logging, which includes a **BasicConfig** method that sets up the logging configuration. 
We pass the logging level, which can be **logging.INFO**, **logging.DEBUG**, or **logging.ERROR**. > Once we wrap this logger decorator around a function, we can get information about the function, including the arguments passed and the returned value. This can be a very useful tool for debugging and error tracking. For more about logging in Python, checkout the official documentation https://docs.python.org/3/howto/logging.html ##### Caching Caching is another use case where the knowledge of decorators comes in very handy. Caching is a technique used to store the results of expensive functions that take the same arguments and return the same value each time they are called. Instead of always recalculating the results, we can cache the process. This approach ensures that too many resources aren’t used up on such expensive functions. To implement a caching function in Python, we use the **@lru_cache** decorator from **functools**. ##### Notes📝 > LRU stands for Least Recently Used. The LRU function has a default maximum size of 128, which is the maximum number of calls to cache. Once this limit is reached, older results are discarded to make space for new ones. The Fibonacci sequence is a great example to illustrate the concept of caching because it depicts a recursive function, where calculating a Fibonacci sequence recalculates the same values multiple times. ```python from functools import lru_cache @lru_cache(maxsize=120) def fibonacci(n): if n < 2: return n return fibonacci(n-1) + fibonacci(n-2) result = fibonacci(10) print(result) ``` ```python john@doe:~/Desktop$ python3 demo.py output: 55 ``` ##### Notes📝 > We use the **lru_cache** decorator from **functools**. When we call **fibonacci(10)**, it recursively calls **fibonacci(9)**, **fibonacci(8)**, **fibonacci(7)**, and so on until it reaches 1 or 0, ultimately outputting **55** to the screen. If n is less than 1, it returns n; otherwise, it returns the sum of the function called with **n-1** and **n-2**. > When we call **fibonacci(30)** for the first time, it stops calling the Fibonacci function when it reaches **10**, since we’ve previously run **fibonacci(10)**. Similarly, when we run **fibonacci(60)** for the first time, it stops calling the Fibonacci function at **30** since we’ve previously run **fibonacci(30)**. Therefore, a cached **fibonacci()** function will execute faster compared to one that isn’t cached. We can write a script to demonstrate the time difference between a cached function and one that isn't cached. 
```python import time from functools import lru_cache # Fibonacci function without caching def fibonacci_no_cache(n): if n < 2: return n return fibonacci_no_cache(n-1) + fibonacci_no_cache(n-2) # Fibonacci function with caching @lru_cache(maxsize=None) # Use LRU cache with unlimited size def fibonacci_with_cache(n): if n < 2: return n return fibonacci_with_cache(n-1) + fibonacci_with_cache(n-2) # Calculate Fibonacci(30) without caching and measure the time start_time = time.time() fibonacci_no_cache(10) no_cache_time = time.time() - start_time # Calculate Fibonacci(30) with caching and measure the time start_time = time.time() fibonacci_with_cache(10) with_cache_time = time.time() - start_time print(f"Time without cache: {no_cache_time}") print(f"Time with cache: {with_cache_time}") ``` ```python john@doe:~/Desktop$ python3 main.py Time without cache: 3.886222839355469e-05 Time with cache: 2.2172927856445312e-05 ``` ##### Notes📝 > We see the differences in time it takes to run a cached function and one that isn't: **cached 0.0000221729**, **not cached 0.0000388622**. You can see we reduce the time by almost half. Caching is particularly effective for recursive functions like Fibonacci, where the same inputs are used repeatedly. It saves time by avoiding redundant calculations, especially for large or frequently accessed values. Now that we've explored some use cases where knowledge of decorators in Python can be very handy, let's further look at some of the built-in decorators that come with Python. #### Python Decorators ##### Properties Properties are built-in Python functions for managing methods of a class. They allow you to define methods that get and set the values of attributes, providing a way to enforce rules and validation when accessing or modifying these attributes. Properties are typically used to encapsulate private attributes and control access to them. Let's see an example of how properties work in a class method: ```python class Circle: def __init__(self, radius): self.radius = radius @property def diameter(self): return 2 * self.radius circle = Circle(4) print(circle.diameter) ``` ```python john@doe:~/Desktop$ python3 main.py Output: 8 ``` We can access the diameter method as an attribute, instead of calling the function with circle.diameter(), thanks to the @property decorator. The beauty of having property decorators is how they make our code more concise. We can choose to make our diameter method stricter by ensuring the radius doesn’t take numbers less than or equal to zero, like this: ```python class Circle: def __init__(self, radius): self.radius = radius @property def diameter(self): if self. radius <= 0: raise ValueError('Only positive numbers') return 2 * self.radius circle = Circle(-3) print(circle.diameter) ``` when we run the code, a value error is raised as shown below ```python john@doe:~/Desktop$ python3 main.py raise ValueError('Only positive numbers') ValueError: Only positive numbers ``` ##### Setters and Getters A getter method is responsible for retrieving the current value of a property, decorated with the @property decorator, while a setter method is responsible for setting the value of a property. The setter method is called when the property is assigned a new value. One major flaw in how we've implemented validation above is that if we have other methods that rely on the radius being positive, this current approach might not handle invalid values properly. This can lead to subtle bugs or crashes. 
For example, consider a circumference method: ```python @property def circumference(self): return 2 * 3.14159 * self.radius ``` The circumference property does not currently validate whether the radius is a positive number or not. It would be redundant to validate the radius within the circumference method. Instead, we can introduce setter and getter properties for our radius. ```python class Circle: def __init__(self, radius): self.radius = radius @property def radius(self): return self._radius @radius.setter def radius(self, value): if value <= 0: raise ValueError('Positive numbers only') self._radius = value @property def diameter(self): return 2 * self.radius @property def circumference(self): return 2 * 3.14159 * self.radius circle = Circle(10) print(f'Diameter: {circle.diameter}') print(f'Circumference: {circle.circumference}') ``` ```python john@doe:~/Desktop$ python3 main.py Diameter: 20 Circumference: 62.8318 ``` ##### Notes📝 > Our getter method retrieves the current value of the radius and returns it, while our setter method decorator checks the validity of the value. If the value is valid, it is assigned to the radius **self._radius**. > Notice that in our getter and setter methods, we use **_radius** to avoid the function calling itself repeatedly. ##### Deleters > The deleter function in a Python property decorator is used to define the behavior when an attribute is deleted using the **del** statement. Like the getter and setter functions, it allows you to control access to an attribute, but specifically handles what happens when you delete the attribute. > Returning to our circle class, let’s include a deleter property that defines how the class should behave when a radius in a circle object is deleted. ```python class Circle: def __init__(self, radius): self.radius = radius @property def radius(self): return self._radius @radius.setter def radius(self, value): if value <= 0: raise ValueError('Positive numbers only') self._radius = value @radius.deleter def radius(self): print('Radius deleted') del self._radius @property def diameter(self): return 2 * self.radius @property def circumference(self): return 2 * 3.14159 * self.radius circle = Circle(10) print(f'Diameter: {circle.diameter}') print(f'Circumference: {circle.circumference}') del circle.radius ``` ```python john@doe:~/Desktop$ python3 main.py Diameter: 20 Circumference: 62.8318 Radius deleted ``` #### Class method (@classmethod) A class method is a method that is bound to the class rather than the instance of the class. It takes the class itself as its first argument (cls) instead of the instance (self). This allows the method to access and modify class state that applies across all instances of the class. A class method can be called by both the class and its instances. Let's see how to use a class method in a class by modeling a Person: ```python from datetime import datetime class Person: def __init__(self, name, age): self.name = name self.age = age @classmethod def birth_year(cls, name, year): return cls(name, datetime.today().year - year) def __str__(self): return f'Name: {self.name}, Year: {self.age}' person = Person.birth_year('Doe', 34) print(person) ``` ```python john@doe:~/Desktop$ python3 main.py Name: Doe, Year: 1990 ``` ##### Notes📝 > Notice that we can call the **birth_year()** method directly on the class without creating a Person instance, e.g., **person = Person('Doe', 34)**. Let's take a look at another example that explains the use of **@classmethod** in a code block. 
Assuming we have an employee record, we can use a class method to retrieve individual employee records based on the PRIMARY KEY passed. ```python import sqlite3 class Employee: def __init__(self, id, name, salary): self.id = id self.name = name self.salary = salary @classmethod def from_database(cls, id): conn = sqlite3.connect('employees.db') cursor = conn.cursor() cursor.execute('SELECT name, salary FROM employees WHERE id=?', (id,)) row = cursor.fetchone() conn.close() if row: name, salary = row return cls(id, name, salary) else: raise ValueError('Employee not found') def __str__(self): return f'{self.id} {self.name} {self.salary}' employee_1 = Employee.from_database(3) print(employee_1) ``` But before we run this code, let's create a **database.py** file that will set up our database of employees. ```python import sqlite3 conn = sqlite3.connect('employees.db') cursor = conn.cursor() cursor.execute('''CREATE TABLE IF NOT EXISTS employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL)''') cursor.execute("INSERT INTO employees (id, name, salary) VALUES (1, 'Ohemaa', 10000)") cursor.execute("INSERT INTO employees (id, name, salary) VALUES (2, 'Nana', 3000)") cursor.execute("INSERT INTO employees (id, name, salary) VALUES (3, 'Kofi', 15000)") conn.commit() cursor.execute("SELECT * FROM employees") rows = cursor.fetchall() for row in rows: print(row) cursor.close() conn.close() ``` When we run **database.py** for the first time it creates an `employees.db` file, as shown below ```python john@doe:~/Desktop$ python3 database.py (1, 'Ohemaa', 10000.0) (2, 'Nana', 3000.0) (3, 'Kofi', 15000.0) ``` Now, when we run our `main.py` file we can get the employee associated with the PRIMARY KEY passed ```python john@doe:~/Desktop$ python3 main.py 3 Kofi 15000.0 ``` ##### Notes📝 > In the classmethod in our **main.py** file, ```python conn = sqlite3.connect("employees.db") cursor = conn.cursor() cursor.execute("SELECT name, salary FROM employees WHERE id=?", (id,)) row = cursor.fetchone() conn.close() ``` > We establish and close the connection with our **employees.db** and then unpack the values of the **row** into **name** and **salary**. This allows us to use a classmethod without creating an employee instance to access an employee record. If we were to implement this using an instance method, it would require us to create an empty employee object first, like **employee = Employee(0, '', '')**, before fetching the record, which might seem a bit too complex.
A staticmethod is most suitable in scenarios where you need to perform a function on a class without keeping a record of the instances of the class, as shown below: ```python class Calculator: @staticmethod def add(a,b): return a + b @staticmethod def multiply(a, b): return a * b print(f'Addition: {Calculator.add(11, 15)}') print(f'Multiply: {Calculator.multiply(18, 15)}') print() # you can also create a calculator instance results = Calculator() print(f'Addition: {results.add(21, 13)}') print(f'Multiply: {results.multiply(20, 10)}') ``` ```python john@doe:~/Desktop$ python3 main.py Addition: 26 Multiply: 270 Addition: 34 Multiply: 200 ``` You notice we do not need to rely on a class instance to use a static method. Let's take a look at another example that calculates the average salary of employees in a company. ```python class Company: @staticmethod def average_salary(employees): result = sum(employees.values())/ len(employees) return int(result) data = { "Alice": 50000, "Bob": 60000, "Charlie": 70000 } print(f'Average salary: {Company.average_salary(data)}') ``` ```python john@doe:~/Desktop$ python3 main.py Average salary: 60000 ``` You see that the staticmethod doesn’t need to know or modify anything in the class other than having access to the class name. In summary, static methods are within a class and do not need access to the class (no self or cls keyword). They cannot change or look at any object attributes or call other methods within the class. Static methods are mostly suitable as helper or utility functions that are relevant to the class but do not need to access or modify class or instance data. #### Abstract method (@abstractmethod) Lastly, let’s take a look at abstractmethod in Python. An abstract class acts as an interface for other subclasses, serving as a blueprint and forcing all subclasses to implement all of its abstract methods. An abstract base class cannot be instantiated directly. Python does not provide abstract classes natively, but rather comes with a module that provides the base for defining Abstract Base Classes (ABC). You use an @abstractmethod when all children of a subclass are required to have the same method as the inherited abstract class. Like the example below ```python from abc import ABC, abstractmethod class Employee(ABC): @abstractmethod def calculate_salary(self): pass class FullTimeEmployee(Employee): def calculate_salary(self): return "Calculating salary for full-time employee" class PartTimeEmployee(Employee): def calculate_salary(self): return "Calculating salary for part-time employee" class ContractEmployee(Employee): def calculate_salary(self): return "Calculating salary for contract employee" ``` ##### Notes📝 > Employee(ABC) defines an abstract class, which cannot be instantiated. If a class inherits from an abstract class, it must implement all the abstract methods defined in the parent abstract class. Otherwise, it will also be considered an abstract class and cannot be instantiated. ##### Abstract property Just like with abstractmethod, an abstract property must also be implemented in any subclass. This allows you to specify that a subclass must include a property with a getter (and optionally a setter) method. 
```python from abc import ABC, abstractmethod class Employee(ABC): @property @abstractmethod def salary(self): """The salary property must be implemented by all subclasses""" pass class FullTimeEmployee(Employee): def __init__(self, base_salary): self.base_salary = base_salary @property def salary(self): return self.base_salary class PartTimeEmployee(Employee): def __init__(self, hourly_rate, hours_worked): self.hourly_rate = hourly_rate self.hours_worked = hours_worked @property def salary(self): return self.hourly_rate * self.hours_worked # Example usage full_time = FullTimeEmployee(50000) part_time = PartTimeEmployee(20, 1000) print(f'Full_time Salary: {full_time.salary}') print(f'Part_time Salary: {part_time.salary}') ``` ##### Notes📝 > Every child class (FullTimeEmployee and PartTimeEmployee) has the salary property, although with different implementations. One takes in one argument and the other takes two. When we run the code, we get the following output: ```python john@doe:~/Desktop$ python3 main.py Full_time Salary: 50000 Part_time Salary: 20000 ``` > FullTimeEmployee and PartTimeEmployee are concrete subclasses that provide specific implementations for the salary property. The salary methods in both subclasses are concrete methods. Concrete methods are methods that have a complete implementation within a class. They contain actual code that defines what the method does, as opposed to abstract methods, which only declare the method's signature without providing an implementation. #### Conclusion An understanding of decorators can help you write cleaner, more maintainable, and reusable code. Decorators are a flexible and readable way to modify the behavior of functions and methods. They are useful for a variety of tasks such as logging, authorization, and caching.
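One small detail the decorator template above glosses over, added here as a hedged aside rather than part of the original walkthrough: a plain wrapper hides the decorated function's name and docstring, which can be confusing when debugging or introspecting. The standard-library `functools.wraps` decorator copies that metadata onto the wrapper. A minimal sketch based on the `mul_by_ten_decorator` pattern used earlier:

```python
import functools

def mul_by_ten_decorator(func):
    @functools.wraps(func)  # copy func's name, docstring, etc. onto the wrapper
    def wrapper_function(*args, **kwargs):
        return func(*args, **kwargs) * 10
    return wrapper_function

@mul_by_ten_decorator
def demo(a, b):
    """Return twice the product of a and b."""
    return 2 * (a * b)

print(demo.__name__)  # demo (not wrapper_function)
print(demo.__doc__)   # Return twice the product of a and b.
print(demo(4, 5))     # 400
```

Without `functools.wraps`, `demo.__name__` would report `wrapper_function`, which makes stack traces and logs (like the logging decorator shown earlier) harder to read.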
osahenru
1,875,535
YouTube Video Transcripts Using LangChain
This post demonstrates how to use the LangChain library to load and save the transcript of a YouTube...
0
2024-06-03T14:37:17
https://dev.to/samagra07/youtube-video-transcripts-using-langchain-25g4
programming, python, machinelearning, ai
This post demonstrates how to use the LangChain library to load and save the transcript of a YouTube video. The Python script retrieves the video's transcript, prints it, and writes the content to a text file for further use. Let's go through the code line by line:

```python
from langchain.document_loaders import youtube
```
- This line imports the `youtube` module from the `langchain.document_loaders` package. This module is responsible for handling YouTube-related document loading functionalities.

```python
import io
```
- This line imports the `io` module from Python's standard library, which provides tools for working with streams and I/O operations.

```python
loader = youtube.YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=3OvmwM61vJw")
```
- This line creates an instance of `YoutubeLoader` by calling the `from_youtube_url` class method. The method takes a YouTube URL as an argument and initializes the `loader` object to handle the video at the specified URL.

```python
docs = loader.load()
```
- This line calls the `load` method on the `loader` object. This method retrieves the document(s) (in this case, probably the transcript or other related data) from the YouTube video and stores them in the `docs` variable. `docs` is expected to be a list of document objects.

```python
print(docs)
```
- This line prints the `docs` variable to the console. This helps in debugging or understanding what data has been loaded from the YouTube video.

```python
with io.open("transcript.txt", "w", encoding="utf-8") as f:
```
- This line opens a file named `transcript.txt` in write mode with UTF-8 encoding. The `with` statement ensures that the file is properly opened and will be automatically closed after the indented block of code is executed. The file object is assigned to the variable `f`.

```python
for doc in docs:
```
- This line starts a for loop that iterates over each document object in the `docs` list.

```python
f.write(doc.page_content)
```
- Within the loop, this line writes the `page_content` attribute of each document object to the file `f`. This attribute likely contains the text content of the document (such as the transcript of the YouTube video).

```python
f.close()
```
- This line closes the file `f`. However, since the file was opened using the `with` statement, it will be closed automatically even if this line is omitted. Including it is redundant but does not cause any issues.

## Summary
This code loads the transcript of a YouTube video, prints the loaded documents to the console, and writes the content of these documents to a file named `transcript.txt`.
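For convenience, here is the complete script assembled from the snippets above, with the redundant `f.close()` dropped. One caveat: in recent LangChain releases the document loaders were moved to the `langchain_community.document_loaders` package, so depending on the version you have installed the import path may need adjusting.

```python
from langchain.document_loaders import youtube
import io

# Build a loader for the target YouTube video and fetch its transcript
loader = youtube.YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=3OvmwM61vJw")
docs = loader.load()
print(docs)

# Write the text content of every returned document to a file
with io.open("transcript.txt", "w", encoding="utf-8") as f:
    for doc in docs:
        f.write(doc.page_content)
# No explicit close is needed -- the `with` block closes the file automatically
```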
samagra07
1,875,532
Buy verified cash app account
https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash...
0
2024-06-03T14:27:40
https://dev.to/saswqri9/buy-verified-cash-app-account-jgb
webdev, javascript, beginners, programming
ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g8a4j73wq0taf829l17p.png)\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts.  With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. 
This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. How To Buy Verified Cash App Accounts.\n\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\n\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n\n \n\nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\n\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\n\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\n\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\n\nA Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\n\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n\n \n\nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. 
This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\n\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. Buy verified cash app account.\n\n \n\nHow cash used for international transaction?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\n\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\n\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\n\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\n\nOffers and advantage to buy cash app accounts cheap?\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\n\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\n\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. 
Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 ‪(980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com"
saswqri9
1,875,531
What is JSX?
JSX, or JavaScript XML, is a syntax extension for JavaScript that allows developers to write...
0
2024-06-03T14:26:37
https://dev.to/ark7/what-is-jsx-5f60
webdev, javascript, beginners, programming
JSX, or JavaScript XML, is a syntax extension for JavaScript that allows developers to write HTML-like code within JavaScript. It's primarily used with React, a popular JavaScript library for building user interfaces. JSX makes writing React components more concise and readable by combining HTML and JavaScript in one place.

## Why do we use JSX?

1. **Declarative Syntax**: JSX provides a more declarative way to define UI components compared to plain JavaScript. This makes the code easier to understand and maintain.

2. **Component-Based Architecture**: React encourages building applications as a collection of reusable components. JSX makes it easy to define these components in a way that resembles the final UI structure, making the code more intuitive.

3. **Integration of HTML and JavaScript**: With JSX, you can seamlessly integrate HTML markup and JavaScript logic within the same file, improving code readability and reducing context switching.

4. **Performance Optimization:** JSX allows React to perform optimizations like Virtual DOM diffing, which helps in efficiently updating the UI without re-rendering the entire DOM tree.

5. **Static Type Checking:** When used with tools like TypeScript or Flow, JSX enables static type checking, catching errors at compile time rather than runtime, thus enhancing code reliability.

## Features of JSX

- **HTML-like Syntax**: Write HTML tags directly in your JavaScript code.
- **JavaScript Expressions**: Embed JavaScript expressions within curly braces {}.
- **Component Nesting:** Easily nest components and manage the structure of your UI.

## Basic Rules of JSX

1. **Return a Single Root Element**

Every JSX expression must have a single root element. This means you need to wrap multiple elements in a parent element, like a div, or use React fragments.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/em1rtrbgdrn37pxyg7cd.png)

As seen above, the return is wrapped in one div element, which acts as the root element.

2. **Close all tags**

In JSX, all tags must be closed, even the self-closing ones like `<img />` and `<input />`.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s1cw85tz5cvifuuk8rer.png)

As seen above, the image tag has been closed and enclosed within the div tag.

3. **camelCase for Attributes**

Use camelCase for naming attributes instead of the standard HTML attribute names.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wh9f56yol7rv9vsgr66o.png)

As seen in the example, we have used the attribute 'className' as opposed to the normal class used in HTML syntax.

Now let's move further and get to know how to render a list of elements in JSX.

[Rendering a list of elements in JSX](https://dev.to/ark7/rendering-a-list-of-elements-in-jsx-35am)
ark7
1,875,530
Top Tools for Web Developers
Hey there, fellow web devs! If you are a newbie setting sail in the expansive world of web...
0
2024-06-03T14:26:31
https://dev.to/buildwebcrumbs/top-tools-for-modern-web-developers-1ba3
beginners, productivity, webdev
**Hey there, fellow web devs!** If you are a newbie setting sail in the expansive world of web development, having the right tools can make or break your journey, and sometimes it is difficult to choose the right ones. there are just soo many to choose from! 🤯 In this article, we’re diving into the must-have tools that are not just popular, but game-changers in boosting your productivity, simplifying complex tasks, and making your coding life a heck of a lot easier (and more fun!). *P.S. This article is for beginners, but more seasoned devs are welcome to read and add their tips.* --- ### 1. Code Editors and IDEs A code editor is a program designed specifically for editing code. It provides features like syntax highlighting, auto-completion, and debugging tools to make coding more efficient and less error-prone. **Visual Studio Code (VS Code)** VS Code is a lightweight but powerful source code editor that runs on your desktop. It comes with support for JavaScript, TypeScript, Node.js, and has a rich ecosystem of extensions for other languages (including C++, C#, Python, PHP, and more). Its built-in Git support, robust debugging, and intelligent code completion (IntelliSense) make it a favorite among developers. **Sublime Text** Known for its speed and efficiency, Sublime Text is a versatile editor that can handle and switch between multiple projects. Its vast array of keyboard shortcuts boosts productivity and allows developers to write and edit code faster. **JetBrains WebStorm** WebStorm is specifically tailored for JavaScript developers. It offers advanced coding assistance for JavaScript, HTML, and CSS, including autocomplete, automated refactoring, and powerful navigation tools. WebStorm integrates with popular version control systems and offers a smooth developer experience for building web, server, and mobile applications. --- ### 2. Version Control Systems Version control systems are software tools that help software teams manage changes to source code over time. They keep track of every modification to the code in a special kind of database. If a mistake is made, developers can turn back the clock and compare earlier versions of the code to help fix the mistake while minimizing disruption to all team members. **Git** Git is the most widely used modern version control system in the world today. It allows multiple developers to work together on complex projects without the fear of conflicting changes. Platforms like GitHub and GitLab also provide a visual interface on top of Git’s capabilities, making it more accessible to manage projects. **Mercurial** As an alternative to Git, Mercurial is designed with simplicity and performance in mind. It's a distributed version control system that allows developers to efficiently manage large codebases. --- **Want to discover the future of Web Development?** {% cta https://www.webcrumbs.org/waitlist %} Learn more here with Webcrumbs {% endcta %} --- ### 3. Front-End Frameworks A front-end framework is a standard structure that allows developers to build user interfaces. It provides a way to organize and structure your code and offers reusable components, which makes web development faster, easier, and more scalable. Using a framework can greatly streamline the development process, especially for complex projects that need to be built quickly. **React** React is a declarative, efficient, and flexible JavaScript library for building user interfaces. 
It lets developers compose complex UIs from small and isolated pieces of code called “components.” **Vue.js** Vue.js is admired for its simplicity and is approachable for beginners. Its core library focuses on the view layer only, making it easy to pick up and integrate with other libraries or existing projects. **Angular** Angular is a platform and framework for building client-side applications using HTML and TypeScript. It is well-suited for developing large-scale enterprise applications and supports an extensive array of features like two-way data binding, modularization, templating, AJAX handling, dependency injection, and more. --- ### 4. CSS Frameworks and Preprocessors CSS frameworks are pre-prepared software frameworks that are meant to allow for easier, more standards-compliant styling of web pages using the Cascading Style Sheets language. Preprocessors, on the other hand, add extra functionality to CSS to keep our stylesheets well-organized and allow us to write code faster. **Bootstrap** Bootstrap is the world’s most popular framework for building responsive, mobile-first sites. It includes HTML and CSS-based design templates for typography, forms, buttons, navigation, and other interface components, as well as optional JavaScript extensions. **Tailwind CSS** Tailwind CSS is a utility-first CSS framework for rapidly building custom designs. Unlike other CSS frameworks, Tailwind does not come with predefined components but instead provides utility classes to create unique designs directly in your markup. **Sass** Sass is a mature, stable, and powerful professional-grade CSS extension language. It helps in making CSS fun again and includes features like variables, nesting, and mixins that allow for more flexible and reusable code. --- ### 5. Build Tools and Task Runners Build tools and task runners are software that automate the repetitive tasks in the software development process like minifying code, compiling source code into binary code, packaging binary code, and running automated tests. This automation makes the development process more consistent and saves developers a lot of time. **Webpack** Webpack is a static module bundler for modern JavaScript applications. When Webpack processes your application, it internally builds a dependency graph that maps every module your project needs and generates one or more bundles. **Gulp** Gulp is a toolkit that helps developers automate painful or time-consuming tasks in their development workflow. It is used for task automation of common tasks like CSS minifying, image optimization, code linting, and building web pages. ### Which tools are you using? We hope this guide helps you in choosing the right tools to enhance your web development projects. Try integrating some of these tools into your workflow and see the difference for yourself. Share your experiences with us or recommend other tools that deserve a mention! At WebCrumbs we are growing a community for devs to get along and learn together! {% cta [https://discord.gg/4PWXpPd8HQ](https://discord.gg/4PWXpPd8HQ "https://discord.gg/4PWXpPd8HQ") %} JOIN US HERE{% endcta %}
opensourcee
1,875,384
Fatiamento🐍
1. Fatiamento: frase[3]: Extrai o quarto caractere da string, que é "Ú". frase[3:13]: Obtém a...
0
2024-06-03T13:18:12
https://dev.to/senhorita_zi/fatiamento-2c8h
python, fatiamento, programming, devops
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ndtn2z1avu390ix75jvm.png)

**1. Slicing:**

frase[3]: _Extracts the character at index 3 (the fourth character of the string), "n"._

frase[3:13]: _Gets the substring from index 3 up to index 12 (not including index 13), resulting in "nica manei"._

**Notes on slicing:**

* Indexes start at 0, so the first letter has index 0.
* The end index can be omitted, in which case the slice runs to the end of the string.
* A step can be used to define how often characters are taken, for example: `frase[0::2]`.

**2. Counting characters:**

frase.count("a"): _Counts how many times the letter "a" appears in the whole sentence._

frase.count("a", 0, 13): _Counts how many times "a" appears between indexes 0 and 12 (the end index is exclusive), resulting in 2._

**3. Locating substrings:**

frase.find("nei"): _Returns the position where the substring "nei" starts, in this case 10._

frase.find("Ventura"): _Since "Ventura" is not present, it returns -1, indicating that the substring does not exist._

**4. Checking for existence:**

_"unica" in frase: Checks whether the substring "unica" is present in the sentence, returning a boolean (note that the check is case- and accent-sensitive)._

**5. Replacement:**

frase.replace("fazendo", "praticando"): _Replaces every occurrence of "fazendo" with "praticando", changing the sentence to "A única maneira de aprender é praticando."._

**6. Uppercase/lowercase conversion:**

frase.upper(): _Converts all letters to uppercase: "A ÚNICA MANEIRA DE APRENDER É FAZENDO."._

frase.lower(): _Converts all letters to lowercase: "a única maneira de aprender é fazendo."._

frase.capitalize(): _Converts only the first character of the string to uppercase and the rest to lowercase._

frase.title(): _Converts the first letter of each word to uppercase: "A Única Maneira De Aprender É Fazendo."._

**7. Removing whitespace:**

frase.strip(): _Removes whitespace from the beginning and end of the sentence._

frase.rstrip(): _Removes whitespace only from the end of the sentence._

frase.lstrip(): _Removes whitespace only from the beginning of the sentence._

**8. Splitting into words:**

frase.split(): _Splits the sentence into a list of words, using whitespace as the separator: `["A", "única", "maneira", "de", "aprender", "é", "fazendo."]`._

**9. Inserting separators:**

"_".join(frase.split()): _Joins the words of the sentence with "_" between them: "A_única_maneira_de_aprender_é_fazendo.". Note that the sentence is split into words first; calling "_".join(frase) directly on the string would insert "_" between every single character._

**10. Appending to a list (not present in the code shown):**

lista.append(elemento): _Adds the element to the end of the list. For example, `lista.append("novo elemento")` would add "novo elemento" to the end of the list._
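To make the examples above easy to run and tweak, here is a small self-contained sketch. It assumes the sentence from the original screenshot is `frase = "A única maneira de aprender é fazendo."`, which is an inference from the outputs discussed in the post:

```python
frase = "A única maneira de aprender é fazendo."

# Slicing
print(frase[3])         # character at index 3
print(frase[3:13])      # substring from index 3 up to (not including) index 13
print(frase[0::2])      # every second character

# Counting and searching
print(frase.count("a"))          # occurrences of "a" in the whole sentence
print(frase.count("a", 0, 13))   # occurrences of "a" between indexes 0 and 12
print(frase.find("nei"))         # start index of "nei"
print(frase.find("Ventura"))     # -1, because "Ventura" is not present
print("unica" in frase)          # membership test (case- and accent-sensitive)

# Transformations
print(frase.replace("fazendo", "praticando"))
print(frase.upper())
print(frase.lower())
print(frase.capitalize())
print(frase.title())

# Splitting and joining
palavras = frase.split()
print(palavras)
print("_".join(palavras))

# Appending to a list
lista = palavras[:]
lista.append("novo elemento")
print(lista)
```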
senhorita_zi
1,875,529
piano music application
Hi , Learning to play the piano is an incredibly rewarding activity, but it can often feel ...
0
2024-06-03T14:26:15
https://dev.to/hussein09/piano-music-application-505
webdev, javascript, beginners, html
Hi , Learning to play the piano is an incredibly rewarding activity, but it can often feel difficult. But don’t worry – with the right strategy, you’ll be playing your favorite songs in no time. **Follow all the latest news via:** **github:** https://github.com/hussein-009 **codepen:** https://codepen.io/hussein009 **Contact me if you encounter a problem** {% codepen https://codepen.io/hussein009/pen/VwOpxym %}
hussein09
1,875,673
100 Salesforce Experience Cloud Interview Questions and Answers
Salesforce Experience Cloud is a dynamic digital experience platform that enables organizations to...
0
2024-06-03T16:49:59
https://www.sfapps.info/100-salesforce-experience-cloud-interview-questions-and-answers/
blog, interviewquestions
--- title: 100 Salesforce Experience Cloud Interview Questions and Answers published: true date: 2024-06-03 14:26:00 UTC tags: Blog,InterviewQuestions canonical_url: https://www.sfapps.info/100-salesforce-experience-cloud-interview-questions-and-answers/ --- Salesforce Experience Cloud is a dynamic digital experience platform that enables organizations to build tailored, connected interactions across various digital touchpoints for customers, partners, and employees. Utilizing Salesforce CRM data, it allows for the creation of community portals, help forums, membership sites, and other digital environments to boost engagement and collaboration. ### Position Requirements for a Salesforce Experience Cloud Specialist - **Technical Skills:** Proficiency in Salesforce configurations, customizations, and Experience Cloud platform capabilities, along with web technologies like HTML, CSS, and JavaScript. - **Experience:** Demonstrated deployment of Salesforce Experience Cloud solutions, with knowledge in user roles, permissions, security, SSO, custom domains, and Salesforce CMS. - **Qualifications:** Relevant Salesforce certifications, preferably a Bachelor’s degree in a related field. - **Professional Qualities:** Strong analytical, problem-solving, communication, and collaboration skills, with a commitment to continuous learning and staying current with Salesforce updates. ## List of 100 Salesforce Experience Cloud Interview Questions and Answers - [Interview Questions and Answers for a Junior Salesforce Experience Cloud Specialist](#aioseo-interview-questions-and-answers-for-a-junior-salesforce-experience-cloud-specialist) - [Interview Questions and Answers for a Middle Salesforce Experience Cloud Developer](#aioseo-interview-questions-and-answers-for-a-middle-salesforce-experience-cloud-developer) - [Interview Questions and Answers for a Senior Salesforce Experience Cloud Software Engineer](#aioseo-interview-questions-and-answers-for-a-senior-salesforce-experience-cloud-software-engineer) - [Scenario Based Interview Questions and Answers for a Salesforce Experience Cloud Consultant](#aioseo-scenario-based-interview-questions-and-answers-for-a-salesforce-experience-cloud-consultant) - [Technical Interview Questions for a Salesforce Experience Cloud Specialist](#aioseo-technical-interview-questions-for-a-salesforce-experience-cloud-specialist) The most common question at any interview is the presence of Salesforce certification. Do you have it covered? If not yet – we recommend taking courses offered by the FocusOnForce Team. [Explore Certification Practice Exams](https://www.sfapps.info/experience-cloud-certification-study-guide/) ![](https://www.sfapps.info/wp-content/uploads/2024/05/banner-3-svg.svg) ## Interview Questions and Answers for a Junior Salesforce Experience Cloud Specialist 1. **What is Salesforce Experience Cloud?** Salesforce Experience Cloud, formerly known as Community Cloud, is a Salesforce platform that enables businesses to create connected, personalized digital experiences for customers, partners, and employees. It integrates directly with Salesforce CRM to provide seamless access to data and functionalities. 1. **Can you explain the difference between Salesforce Sales Cloud and Experience Cloud?** Sales Cloud is primarily a sales tool, designed to manage sales processes and customer relationships effectively. 
In contrast, Experience Cloud is designed to extend the organizational data and processes to external stakeholders through digital experiences like portals, forums, and sites. 1. **What are some common use cases for Salesforce Experience Cloud**? Common use cases include building partner portals for channel sales, customer service portals for support, employee portals for internal communication, and community forums for customer engagement and feedback. 1. **What is a community in the context of Salesforce Experience Cloud?** In Experience Cloud, a community refers to a branded space for users to connect and collaborate. Communities can be tailored for different audiences, such as customers, partners, or employees, to facilitate specific interactions and processes. 1. **How do you ensure a community is accessible in Experience Cloud?** Accessibility can be ensured by adhering to web accessibility standards such as WCAG and using Salesforce’s built-in accessibility features. This includes using keyboard navigable components, ARIA roles, and ensuring that the community is usable with screen readers. 1. **Can you name some templates available in Experience Cloud and their purposes?** Salesforce provides various templates, such as the Customer Service template to help organizations create a self-service support portal, and the Partner Central template for managing partner relationships. Each template comes with pre-built functionality to accelerate deployment. 1. **What are Topics in Salesforce Experience Cloud?** Topics in Experience Cloud categorize content and discussions, making them more discoverable. They are similar to tags but are more integrated into the navigation of communities to enhance user experience and content organization. 1. **Explain the role of Lightning Components in Experience Cloud.** Lightning Components are reusable building blocks for creating dynamic community pages in Experience Cloud. They can be customized and configured to add functionality such as displaying custom data, integrating third-party systems, or facilitating user interactions. 1. **What is Audience Targeting in Experience Cloud and how does it work?** Audience Targeting allows you to customize what different segments of users see in the community. It uses criteria like profile, location, or custom attributes to tailor the user interface, presenting relevant content and experiences to different user groups. 1. **How do you handle security and permissions in an Experience Cloud community?** Security and permissions in Experience Cloud are managed through a combination of Salesforce’s standard security model (including roles, profiles, and sharing rules) and community-specific settings like sharing sets and community roles. 1. **What is a Sharing Set?** Sharing sets are used in Experience Cloud to grant external users access to data that would otherwise be restricted. They allow users to see records associated with their account or contact, based on user criteria defined in the sharing set. 1. **How would you integrate third-party applications into Experience Cloud?** Integration can be achieved through the use of APIs, pre-built connectors, or custom Lightning Components that interact with external systems. Salesforce’s AppExchange also offers many ready-made integrations. 1. **Can you explain what CMS Connect is?** CMS Connect is a feature in Experience Cloud that allows you to seamlessly integrate and display content from an external CMS (Content Management System). 
It pulls content like blogs, images, and news into the community without manual updates. 1. **Describe how to use Google Analytics with Experience Cloud.** Google Analytics can be integrated with Experience Cloud by adding the Google Analytics tracking code to the community pages. This allows for tracking user behavior, page views, and custom events within the community. 1. **What is a Community Manager in Experience Cloud?** The Community Manager is an administrative interface used to manage and monitor the community. It provides tools for moderating content, analyzing community health and activity, managing users, and customizing the look and feel of the community. 1. **How do you optimize the mobile experience for an Experience Cloud community?** The mobile experience can be optimized by using responsive design templates provided by Salesforce, testing the community on various devices, and customizing the mobile-specific settings within the Experience Cloud setup. 1. **What are Recommendations in Experience Cloud?** Recommendations in Experience Cloud are personalized suggestions that appear to users based on their activities and profile data. These can include content, groups, or connections that might be relevant to the user. 1. **Explain how Salesforce Identity is used in Experience Cloud.** Salesforce Identity provides identity services to Experience Cloud, such as single sign-on (SSO), social sign-on, and multi-factor authentication. It helps streamline access and improve security for community users. 1. **What are the key performance indicators (KPIs) for a successful Experience Cloud community?** Key KPIs might include active user count, user engagement rate, case deflection rate, and customer satisfaction scores. These indicators help measure the impact and value of the community to the organization. 1. **Describe the steps to migrate from an older community template to a newer one in Experience Cloud.** Migration involves planning the new community structure, recreating customizations like branding and components, testing the new community thoroughly, and then migrating data and users. Salesforce provides tools and documentation to support these migrations. These Experience Cloud interview questions cover fundamental concepts, practical applications, and strategic management of Salesforce Experience Cloud, giving a broad overview useful for both interview preparation and general knowledge. **You might be interested:** [Salesforce Service Cloud Interview Questions](https://www.sfapps.info/100-salesforce-service-cloud-interview-questions-and-answers/) ### Insight: When recruiting a Junior Salesforce Experience Cloud Specialist, it’s essential to evaluate both technical skills and potential for growth. Start with foundational Salesforce Experience Cloud interview questions that test understanding of basics, like its key features and functionalities. This assesses their technical grounding and ability to articulate insights clearly. Since this is a junior position, prioritize assessing adaptability and eagerness to learn. Questions about how candidates keep up with Salesforce updates and their approach to new technologies can indicate their commitment to ongoing professional development. ## Interview Questions and Answers for a Middle Salesforce Experience Cloud Developer 1. **What is the purpose of Lightning Bolt solutions in Experience Cloud?** Lightning Bolt solutions in Experience Cloud are templates used to deploy industry-specific solutions quickly. 
They include pre-built themes, pages, and components tailored for specific business needs and can be further customized. 1. **How can you customize the user interface of a community using Experience Builder?** Experience Builder allows for drag-and-drop customization of community pages. You can add, remove, and rearrange components, change layouts, and apply different themes and branding options without needing to write code. 1. **What is the process to enable multilingual support in a community?** To enable multilingual support, you must first set up multiple languages in the Salesforce setup, then translate content and labels using the Translation Workbench or third-party translation services. Finally, configure the community to display the appropriate language based on the user’s settings or preferences. 1. **Can you explain the concept of “Super Users” in Experience Cloud communities and their benefits?** Super Users are community members who are granted elevated permissions, allowing them to access and moderate content usually restricted to internal users. This is particularly useful in customer service communities for peer-to-peer support and enhances community engagement. 1. **How do you implement Single Sign-On (SSO) for Experience Cloud?** Implementing SSO involves setting up an Identity Provider (IdP) that supports SAML 2.0 or OpenID Connect, configuring Salesforce as the Service Provider, and mapping attributes between Salesforce and the IdP. This setup simplifies the login process for users by using one set of credentials across multiple applications. 1. **Describe a method to optimize the performance of a Salesforce Experience Cloud site.** Performance optimization can be achieved by minimizing the use of heavy custom code, optimizing images and static resources, using Salesforce CDN for content delivery, and regularly reviewing the Performance Analysis in Experience Builder. 1. **What are Data Categories, and how do they work in Experience Cloud?** Data Categories are used to organize and control access to knowledge articles in a community. They help in structuring content so that users can easily find relevant information based on topics such as product lines or service types. 1. **Explain the difference between Page Variations and Audience Targeting in Experience Cloud.** Page Variations allow you to create different versions of the same page for A/B testing or segment-specific content. Audience Targeting, on the other hand, shows personalized content to users based on predefined criteria like role, profile, or region without altering the page structure. 1. **How do you ensure data security when exposing sensitive information in a community?** Data security in communities is managed by strict sharing rules, profiles, and permission sets that control access to data. Additionally, using field-level security and record types can help manage what data is visible to community users. 1. **What is the purpose of the Reputation system in Experience Cloud, and how do you configure it?** The Reputation system in Experience Cloud motivates user engagement by awarding points and badges for community activities. It is configured in the Community Management settings, where you can define the actions that earn points and the rewards for reaching certain levels. 1. **Can you explain the role of Salesforce CMS in Experience Cloud?** Salesforce CMS is used to create, manage, and deliver content across Salesforce platforms, including Experience Cloud. 
It allows you to share multimedia content and articles within the community, facilitating better engagement and resource sharing. 1. **Discuss the steps involved in migrating a community to a new org.** Migrating a community involves several steps, such as exporting data and configurations from the original org, preparing the new org by setting up similar structures and permissions, importing the data, and then conducting thorough testing to ensure functionality. 1. **What are the best practices for using custom Lightning components in Experience Cloud?** Best practices include following the Lightning Component Framework’s performance best practices, ensuring components are secure against XSS and CSRF, making them as reusable as possible, and testing them extensively in the context of the community. 1. **How can you use Analytics within Experience Cloud to enhance user experience?** Analytics can be used to track user behavior, engagement levels, and content effectiveness within the community. Insights gained can inform decisions on content creation, community layout adjustments, and targeted marketing strategies. 1. **Describe how to use the API to extend the functionality of Experience Cloud.** The API can be used to integrate external systems, automate processes, or enhance community features. Common uses include synchronizing user data, pulling or pushing content from other systems, and automating moderation or administrative tasks. 1. **Explain the process of setting up a custom domain for an Experience Cloud site.** Setting up a custom domain involves purchasing a domain name, configuring it with DNS settings to point to Salesforce servers, and then setting up the domain in the Salesforce org to ensure secure and branded access to the community. 1. **What considerations should be taken when integrating e-commerce functionalities into an Experience Cloud community?** Key considerations include ensuring secure transactions, providing seamless user experience, maintaining data synchronization between Salesforce and e-commerce platforms, and complying with local and international commerce regulations. 1. **How do you manage and moderate user-generated content in Experience Cloud?** Content moderation can be managed through automated moderation rules, manual review processes, and community guidelines that enforce appropriate content standards. Tools like moderation queues and automated alerts assist in maintaining content quality. 1. **What are the implications of enabling self-registration in Experience Cloud?** Enabling self-registration facilitates easier access for new users but requires robust security measures like email verification, captcha, and compliance with data protection regulations to prevent abuse and ensure user authenticity. 1. **How do you troubleshoot common issues in Experience Cloud communities?** Common troubleshooting steps include checking permissions and sharing settings, verifying component configurations, reviewing system logs for errors, and testing in different user contexts to replicate and diagnose issues. These Experience Cloud Salesforce interview questions are designed to probe the candidate’s experience and understanding of complex functionalities within the Salesforce Experience Cloud, reflecting their capability to handle typical challenges in mid-level roles. ### Insight: For a middle Salesforce Experience Cloud Specialist, the interview should delve deeper into both technical expertise and the ability to handle more complex project responsibilities. 
It’s important to explore their proficiency with advanced features of Experience Cloud, such as customization, integration capabilities, and troubleshooting complex issues. Scenario-based questions that simulate real-world problems are ideal for assessing their technical acumen and problem-solving strategies. ## Interview Questions and Answers for a Senior Salesforce Experience Cloud Software Engineer 1. **How do you design an architecture for a large-scale global Experience Cloud implementation?** Designing an architecture involves considering multiple factors like scalability, security, multi-language support, data residency requirements, and integration with other systems. Use a combination of Salesforce best practices, custom settings for localization, and robust data architecture to ensure performance and compliance globally. 1. **What are the key considerations for data synchronization and integration in a multi-org Experience Cloud environment?** Key considerations include understanding the data flow, volume, and frequency of updates. Implement middleware or integration tools like MuleSoft for robust data synchronization. Ensure data integrity and security by using encrypted channels and adhering to compliance standards. 1. **Describe a complex community migration you have managed. What were the challenges and how did you address them?** In complex migrations, challenges can include data loss, feature parity, and user adoption. Address these by thorough planning, phased rollouts, rigorous testing, and clear communication with stakeholders. Use tools like Salesforce’s Metadata API and change sets for efficient migration of configurations and customizations. 1. **How would you handle performance optimization for a community that has high user engagement and complex integrations?** To optimize performance, implement efficient caching strategies, optimize API calls and database queries, and use Salesforce’s performance analysis tools to identify bottlenecks. Additionally, ensure that all integrations are asynchronous where possible to avoid impacting user experience. 1. **Explain how to ensure 24/7 uptime for critical Experience Cloud applications.** Ensure uptime by using Salesforce’s robust infrastructure with failover mechanisms, regularly scheduled backups, and real-time monitoring tools. Implement a disaster recovery plan and conduct regular drills to test the responsiveness of the system under failure conditions. 1. **What strategies would you use to manage and optimize SEO for Experience Cloud sites?** Optimize SEO by ensuring that the community is crawlable by search engines, using relevant keywords, optimizing metadata, and implementing structured data. Also, use Salesforce features to customize URLs and improve load times to enhance search engine ranking. 1. **Discuss how you would secure sensitive customer data in a community accessed by external users.** Secure sensitive data by implementing robust access controls, using field-level security, and ensuring data is encrypted at rest and in transit. Regularly audit permissions and conduct vulnerability assessments to maintain security. 1. **How do you manage large-scale user adoption and change management for a new Experience Cloud deployment?** Manage adoption through a comprehensive change management strategy that includes stakeholder engagement, effective training programs, clear communication, and phased rollouts. Gather feedback and provide support to ease the transition for users. 1. 
**What are the best practices for custom development in Experience Cloud?** Best practices include adhering to Salesforce’s development guidelines, writing reusable and maintainable code, and ensuring that customizations are upgradeable and scalable. Use version control systems and continuous integration/continuous deployment (CI/CD) practices to manage the development lifecycle efficiently. 1. **How do you measure the success of an Experience Cloud implementation?** Success can be measured through specific KPIs such as user engagement rates, case deflection rates, customer satisfaction scores, and overall ROI. Regularly analyze these metrics and gather user feedback to continuously improve the community. 1. **Can you describe a scenario where you utilized Einstein Analytics within Experience Cloud?** Einstein Analytics can be used to provide deeper insights into user behavior and community health. For example, analyze data on user engagement and content effectiveness to tailor marketing strategies and improve user experience. 1. **Explain the considerations for implementing multi-factor authentication in Experience Cloud.** Considerations include user experience, regulatory requirements, and the type of data accessed. Implement multi-factor authentication using Salesforce’s built-in capabilities or third-party tools, ensuring it aligns with the security needs without overly complicating the login process. 1. **What is your approach to handling legacy system integration with Experience Cloud?** Approach legacy system integration by assessing the current architecture, determining integration points, and choosing the appropriate integration method (real-time vs. batch). Use middleware if necessary to facilitate communication between systems and ensure data consistency. 1. **How do you ensure that custom Lightning components are efficient and secure?** Ensure efficiency by optimizing code, minimizing server calls, and using efficient data handling like pagination and lazy loading. Secure components by adhering to Salesforce’s security best practices, such as checking for CRUD/FLS and avoiding inline JavaScript. 1. **Discuss your strategy for ongoing maintenance and updates in Experience Cloud.** Ongoing maintenance should include regular system audits, updates to stay aligned with Salesforce releases, and continuous monitoring for performance and security. Implement a structured process for handling updates and configurations to minimize disruption. 1. **What is your approach to disaster recovery and business continuity in Experience Cloud?** Approach disaster recovery by having a detailed and tested plan that includes data backups, system failovers, and alternative operational procedures. Ensure business continuity by having redundant systems in place and clear protocols for various types of incidents. 1. **How do you handle scalability challenges in rapidly growing communities?** Handle scalability by planning capacity ahead of time, utilizing scalable features of Salesforce like Elastic Compute Resources, and regularly reviewing performance metrics to adjust resources as needed. 1. **What are the complexities of implementing custom branding in Experience Cloud?** Complexities include maintaining brand consistency across all devices and platforms, ensuring that custom branding does not affect loading times or user experience, and adhering to web accessibility standards. 1. 
**How do you manage project stakeholders in a complex Experience Cloud rollout?** Manage stakeholders by establishing clear communication channels, regular updates, and managing expectations through transparent sharing of project goals, timelines, and potential challenges. 1. **Can you explain how you would set up a governance model for an Experience Cloud platform?** Set up a governance model by defining clear roles and responsibilities, establishing usage policies, and implementing a framework for ongoing evaluation and optimization of community operations. These Experience Cloud Salesforce interview questions aim to probe the depth of experience and the ability to handle complex situations, key for a senior-level professional in the Salesforce Experience Cloud. **You might be interested:** [Salesforce Sales Cloud Interview Questions](https://www.sfapps.info/100-salesforce-sales-cloud-interview-questions-and-answers/) ### Insight: When recruiting for a Senior Salesforce Experience Cloud Specialist, the focus should be on advanced expertise and strategic impact. Candidates should demonstrate deep technical proficiency and the ability to architect, implement, and manage complex solutions that serve broad business needs. Interview questions should probe their experience with large-scale deployments, custom integrations, and their approach to data security and compliance within the Experience Cloud framework. Equally critical is assessing leadership and communication skills. Candidates should articulate past experiences where they have led teams, driven cross-functional projects, and influenced IT strategy. ## Scenario Based Interview Questions and Answers for a Salesforce Experience Cloud Consultant 1. **A client wants to launch a global community for diverse customer groups. What considerations would you take for localization and global roll-out?** Considerations should include multi-language support using Salesforce’s built-in translation capabilities, regional data compliance and storage considerations, localization of content and branding, and planning for varied timezone support for users and maintenance. 1. **A community is experiencing slow page load times. How would you diagnose and resolve this issue?** Diagnose the issue using Salesforce’s Community Page Optimizer and check for heavy component usage, unoptimized images, and excessive API calls. Resolve by simplifying page design, using Salesforce CDN for image hosting, and optimizing component code to reduce server load. 1. **A business wants to integrate their existing e-commerce platform with Experience Cloud. What steps would you take to ensure a seamless integration?** Steps include evaluating current APIs for compatibility, planning data synchronization strategies (real-time vs batch), ensuring secure data transmission, and customizing the community interface for a cohesive user experience. Also, conduct thorough testing to ensure integration works seamlessly across systems. 1. **During a high-traffic event, a community’s performance dropped. How would you handle this situation immediately and in the long term?** Immediately, check Salesforce System Status to determine if the issue is platform-wide and optimize resource-intensive operations. Long-term, plan for scalable cloud resources, implement efficient caching, and prepare an incident response strategy for future events. 1. **A client requires a custom loyalty program integrated within their customer community. 
Describe your approach to design and implement this feature.** Design by outlining the program’s rules and rewards within Salesforce objects and processes. Implement by developing custom components or using existing solutions from AppExchange, ensuring they integrate well with the community’s data model and are scalable for future enhancements. 1. **Users are reporting difficulty in navigating the community and finding information. How would you improve the user experience?** Improve user experience by conducting usability tests to identify pain points, simplifying navigation based on user feedback, enhancing search functionality with AI-driven suggestions, and organizing content into clear, accessible categories. 1. **The client wants real-time customer support features within the community. What solutions would you implement?** Implement Salesforce Service Cloud integration for real-time case management, enable Live Agent for instant chat support, and set up a Chatbot for handling common queries automatically to enhance support efficiency. 1. **A recent update to the community caused some custom components to fail. How would you resolve this issue?** First, roll back the update if severely impacting operations. Review the custom components’ code to identify compatibility issues with the update, fix these issues, and conduct thorough testing in a sandbox environment before reapplying the update. 1. **The marketing team wants to use the community to conduct targeted campaigns. What tools would you use to segment and target users effectively?** Use Salesforce CRM data for user segmentation, leveraging fields like location, product usage, and engagement history. Utilize built-in marketing tools like Salesforce Marketing Cloud for campaign management and personalized content delivery. 1. **A security audit revealed potential vulnerabilities in community data access. What steps would you take to enhance security?** Review and tighten security settings, including profiles, permission sets, and sharing rules. Implement field-level security and audit trail monitoring, and educate community managers on best security practices. 1. **You’re tasked with integrating third-party content into the community without hosting the content on Salesforce. How would you proceed?** Utilize Salesforce CMS Connect to pull content from the third-party system, ensuring the integration respects data format and security requirements, and present it seamlessly within the community. 1. **Post-launch, a community’s user engagement is lower than expected. What methods would you use to analyze and boost engagement?** Analyze user behavior with Salesforce Einstein Analytics to identify drop-off points and content gaps. Enhance engagement by introducing interactive elements like polls and gamification, and improve content relevance and personalization. 1. **The client wishes to automate content moderation in their community. What features would you recommend?** Recommend automated moderation tools that leverage AI to filter and flag inappropriate content, set up keyword alerts, and configure moderation rules based on community standards. Regularly update the moderation filters and review flagged content manually to ensure accuracy. 1. **There’s a request to create a seamless user transition between multiple Salesforce communities. How would you facilitate this?** Implement Single Sign-On (SSO) across communities, ensuring a unified login experience. 
Design navigation that includes interlinking between communities, and maintain consistent branding and user interface design. 1. **A client needs to track the ROI from their community investment. What metrics and tools would you use to report this?** Use Salesforce’s built-in analytics to track metrics such as user acquisition, engagement rates, case deflection, and customer satisfaction. Calculate ROI by comparing these metrics against community operating costs and increased revenue or cost savings. 1. **How would you approach upgrading from an older community template to a new one without losing custom functionality?** Plan the upgrade by first testing the new template in a sandbox environment. Gradually migrate custom functionalities, ensuring compatibility and performance, and involve end-users early in the process for feedback and adjustment. 1. **The client wants to integrate external reviews and ratings into the community. Describe your technical approach.** Integrate using APIs to pull reviews and ratings from external platforms. Ensure data is presented in a user-friendly manner within the community, and handle synchronization issues to maintain data accuracy and timeliness. 1. **To enhance knowledge sharing, the client wants a wiki-style feature in their community. How would you implement this?** Leverage Salesforce Knowledge to create a wiki-style setup, enabling community members to contribute articles, which are then reviewed and published by moderators. Customize the interface to support easy navigation and content discovery. 1. **A user reports being unable to access certain premium content areas in the community. How would you troubleshoot and solve this issue?** Verify the user’s permissions and roles, check the content’s sharing settings, and ensure there are no system-wide issues affecting access. Adjust configurations as necessary and provide direct support to resolve any misunderstandings about access privileges. 1. **How would you handle the migration of a community to a newer Salesforce instance while ensuring minimal downtime?** Conduct thorough planning and testing in a sandbox environment. Schedule the migration during off-peak hours, communicate clearly with users about expected downtime, and provide continuous support during and after the migration to address any issues quickly. These scenario-based questions require a candidate to draw on both technical expertise and strategic thinking, showcasing their ability to handle real-world challenges in Salesforce Experience Cloud. ### Insight: Scenario-based questions are invaluable for evaluating Salesforce Experience Cloud Specialists, as they provide insight into the candidate’s practical knowledge and problem-solving skills in real-world contexts. These questions allow candidates to demonstrate their technical expertise, decision-making process, and creativity in resolving complex issues. For instance, asking a candidate to outline how they would manage a community launch or integrate third-party systems into Experience Cloud can reveal their depth of understanding and strategic thinking. ## Technical Interview Questions for a Salesforce Experience Cloud Specialist Need professional help with conducting technical interview? Get top Salesforce interviewer from our parent company! [Explore More](https://mobilunity.com/technical-interview-services/) ![](https://www.sfapps.info/wp-content/uploads/2024/05/banner-1-icon.svg) 1. 
**What is the role of Salesforce Experience Builder?** Experience Builder is a tool used for creating and customizing digital experiences within Salesforce Experience Cloud. It provides a drag-and-drop interface to design and manage community pages, allowing customization without extensive coding. 1. **Can you explain the difference between Permission Sets and Profiles in Salesforce Experience Cloud?** Profiles determine the baseline level of access a user has to objects and data, acting as a primary gatekeeper for user permissions. Permission sets extend these permissions without altering the base profile, allowing for more granular access control tailored to individual needs. 1. **What are the key considerations when configuring Search in a Salesforce community?** Key considerations include setting up data categories for organizing searchable content, defining which objects and fields are searchable, and customizing the search layout to enhance user experience. Additionally, implementing synonyms and promoting search results are also crucial for effective search functionality. 1. **Describe how to use Salesforce’s Lightning Web Components in Experience Cloud.** Lightning Web Components (LWC) are used in Experience Cloud for creating efficient and reusable custom components. They utilize modern web standards and can be designed to interact with Salesforce data and services securely. LWCs enhance the performance and scalability of community pages. 1. **How would you integrate external APIs into a Salesforce community?** External APIs can be integrated using Apex callouts. Set up named credentials for secure API connections, use Apex to handle the HTTP requests and responses, and ensure callouts are made asynchronously to maintain community performance. 1. **Explain the process of customizing a community template in Salesforce.** Customizing a community template involves using Experience Builder to modify layouts, styles, and components. Developers can add custom Lightning components or third-party integrations, adjust navigation menus, and apply specific branding guidelines to tailor the community’s appearance and functionality. 1. **What is the purpose of Salesforce CMS in Experience Cloud, and how does it integrate with other Salesforce products?** Salesforce CMS is designed to manage content centrally and deliver it across Salesforce ecosystems, including Experience Cloud. It allows users to create, store, and manage content in various formats, which can be easily shared across different Salesforce platforms like Marketing Cloud, Commerce Cloud, and more. 1. **Discuss the use of Chatter in Experience Cloud.** Chatter in Experience Cloud is used for social collaboration, enabling community members to connect, communicate, and share information efficiently. It supports features like feeds, groups, files, and topics, enhancing engagement and knowledge sharing within the community. 1. **How do you handle version control and deployment for custom developments in Salesforce communities?** Version control is managed through Salesforce DX or third-party services like Git. For deployment, use change sets, Salesforce DX, or CI/CD pipelines to move custom developments from sandboxes to production environments, ensuring all components are tested and validated before deployment. 1. 
**What is Audience Targeting, and how do you implement it in a Salesforce community?** Audience Targeting allows content and components to be displayed to specific user groups based on defined criteria such as profile, location, or custom attributes. Implement it using the Experience Builder to create audience rules and apply these rules to pages or components. 1. **Explain how Salesforce uses data categories and how they apply to Experience Cloud.** Data categories in Salesforce are used to classify and organize data, making it easier to manage access and visibility within communities. In Experience Cloud, data categories help control which knowledge articles or discussions are visible to specific community user groups. 1. **Describe the steps to secure a public Salesforce community.** Securing a public community involves configuring strict sharing settings, using SSL, setting up robust authentication mechanisms (like SSO or two-factor authentication), regularly auditing user activities, and ensuring all custom code is secure against common web vulnerabilities. 1. **How do you utilize Salesforce’s reporting and dashboards within Experience Cloud?** Use Salesforce reports and dashboards to track community engagement, user activity, and business metrics. Customize reports for specific community data, and embed these dashboards within community pages for real-time data visibility. 1. **What are the implications of enabling Public Access settings in a Salesforce community?** Enabling Public Access allows non-authenticated users to access certain community areas. While it can increase engagement and reach, it also poses security risks. It’s crucial to carefully configure which data and operations are accessible publicly to prevent unauthorized data access. 1. **Explain the concept of a Sharing Set and its use in Experience Cloud.** Sharing Sets allow community users with specific profiles to access records associated with them, based on user criteria. They are essential for providing controlled access to data in communities, especially in B2B and customer service scenarios. 1. **How can Einstein AI be integrated into Salesforce communities, and what are its benefits?** Einstein AI can be integrated to provide predictive insights, automated recommendations, and enhanced search capabilities. Benefits include personalized user experiences, improved content relevance, and streamlined community operations through AI-driven data analysis. 1. **Discuss the process of setting up a custom domain for an Experience Cloud site.** Setting up a custom domain involves registering the domain, configuring it in Salesforce with the Domain Management tool, creating a CNAME record to point to Salesforce servers, and applying for an SSL certificate to ensure secure connections. 1. **How do you manage user roles and access within a large community?** Manage roles and access by defining clear user profiles and permission sets, using role hierarchies, and grouping users into public groups or teams where necessary. Regularly review and adjust access as community scales or business needs change. 1. **What are the best practices for maintaining high availability and disaster recovery in Salesforce Experience Cloud?** Best practices include leveraging Salesforce’s robust infrastructure, configuring data backup processes, using replication and failover techniques, and regularly testing disaster recovery plans to ensure quick recovery from any disruptions. 1. 
**How do you handle customizations during Salesforce major releases in communities?** Handle customizations by testing them extensively in sandbox environments updated with the new release, reviewing release notes for any changes affecting custom code, and adjusting customizations to ensure compatibility and optimal performance post-update. These interview questions on Experience Cloud Salesforce should challenge a candidate’s technical expertise and understanding of complex Salesforce Experience Cloud environments, suitable for a high-level or specialist role. ### Insight: Technical interviews for Salesforce Experience Cloud Specialists should be meticulously structured to gauge both in-depth knowledge and practical application of the platform. It’s essential to design questions that explore the candidate’s proficiency in areas like system architecture, data management, and custom development within the Experience Cloud. This approach helps identify their ability to develop, customize, and troubleshoot complex systems effectively. Questions should also test the candidate’s familiarity with integrations, security protocols, and performance optimization, reflecting real-life issues they will face on the job. For instance, asking how they would handle specific scenarios involving API integration or user authentication can provide insight into their technical competence and problem-solving skills. ## Conclusion These sample questions and requirements provide a solid foundation for evaluating candidates for a Salesforce Experience Cloud Specialist position. While they cover essential skills and attributes, it’s important to tailor the interview process to match the specific needs and culture of your organization. These examples should serve as a starting point, enabling you to develop a comprehensive assessment strategy that identifies candidates who not only have the requisite technical abilities but also align well with your team’s dynamics and the company’s strategic goals. The post [100 Salesforce Experience Cloud Interview Questions and Answers](https://www.sfapps.info/100-salesforce-experience-cloud-interview-questions-and-answers/) first appeared on [Salesforce Apps](https://www.sfapps.info).
doriansabitov
1,875,528
Help customers find your business with the Azure Maps Store Locator
Maximize Visibility In today's digital age, the visibility of your business is paramount....
0
2024-06-03T14:23:50
https://clemens.ms/azure-maps-store-locator/
azure, maps, storelocator, azuremaps
## Maximize Visibility In today's digital age, the visibility of your business is paramount. Once you've captured customer interest online, the next crucial step is guiding them to your physical storefronts. [Azure Maps Store Locator](https://github.com/Azure-Samples/Azure-Maps-Locator) streamlines this journey, offering an interactive and intuitive experience that leads customers right to your doorstep. ## Simplifying Store Discovery Creating a basic store locator using Azure Maps is already a straightforward task, involving the loading of store locations onto a map and potentially setting up a simple search functionality. However, for larger organizations managing thousands of locations and requiring advanced filtering options, a more sophisticated solution is essential. Fortunately, the Azure Maps Store Locator, combining the power of various Azure services, caters precisely to these needs. ![Azure Maps Store Locator](https://clemens.ms/azure-maps-store-locator/storelocator.gif) ## Enhance Your Locator Experience Imagine a tool that effortlessly connects potential customers to the nearest branch of your business, tailored to their specific needs. Whether they're searching for a particular service or other points of interest, [Azure Maps Store Locator](https://github.com/Azure-Samples/Azure-Maps-Locator) is the user-friendly and adaptable tool you need. It's backed by a comprehensive management system, enabling you to create a rich locator experience. ## Feature-Rich Platform Azure Maps Store Locator boasts a wide array of features to improve your location-based offerings: - **Store Locator Backend**: Integrates REST APIs and a Store Locator Web Control for seamless management. - **Autocomplete Search**: Quickly find store names, addresses, POIs, or zip codes. - **Scalability**: Manages over 10,000 locations without a hitch. - **Proximity Insights**: View nearby stores and distance metrics. - **Location-Based Search**: Searches can be performed based on the user's current location. - **Travel Time Estimates**: Provides estimated travel times for various modes of transport. - **Comprehensive Store Details**: Access store information, directions, and more through interactive popups. - **Dynamic Filtering**: Users can filter stores based on specific features. - **Individual Store Pages**: Delve into what each store offers with detailed embedded maps. - **Security**: Employs Microsoft Entra ID for secure access to the location management system. - **Rich Data**: Store details include location, hours, photos, and the option to add custom features. - **Accessibility**: Features speech recognition and other accessibility enhancements. - **Effortless Deployment**: Easily deploy within your Azure ecosystem. ## Quick Setup Guide Deploying Azure Maps Store Locator is straightforward: 1. **Azure Subscription**: Confirm you have an Azure subscription. If not, obtain one for [free](https://azure.microsoft.com/free/) at the official Azure website. 2. **Azure Shell Access**: Sign in to Azure Shell. [https://shell.azure.com/](https://shell.azure.com/) 3. **Deployment**: Run the provided PowerShell script to install the Azure Maps Store Locator. ```powershell iex (iwr "https://samples.azuremaps.com/storelocator/deploy.ps1").Content ``` ![Architecture](https://clemens.ms/azure-maps-store-locator/architecture.jpg) Integrating the store locator into your website requires some HTML and JavaScript to call the store locator backend REST APIs. 
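As a rough sketch of what that glue code could look like, the snippet below queries a store locator backend and lists the returned stores. Note that the base URL, the endpoint path (`/api/stores/search`), its query parameters, and the response fields are assumptions made purely for illustration; check the sample's documentation for the actual REST API surface of your deployment.

```javascript
// Minimal sketch: query a store locator backend and render the results.
// NOTE: the endpoint path and response fields below are assumptions for
// illustration only; adjust them to match your deployed backend's API.
const locatorBaseUrl = "https://<your-app-name>.azurewebsites.net"; // hypothetical host

async function findNearbyStores(latitude, longitude) {
  const url = `${locatorBaseUrl}/api/stores/search?lat=${latitude}&lon=${longitude}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Store locator request failed: ${response.status}`);
  }
  return response.json(); // assumed to be an array of store objects
}

findNearbyStores(47.6062, -122.3321)
  .then((stores) => {
    for (const store of stores) {
      // 'name' and 'distance' are assumed fields on each store object
      console.log(`${store.name} (${store.distance} km away)`);
    }
  })
  .catch(console.error);
```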
Once deployed, the solution is yours to modify: you can tailor the source code of your Azure Maps Store Locator to your specific needs. The Azure Maps Store Locator empowers you to create and maintain an intuitive location-based search experience to delight your customers. Enhance your online presence today with the power of Azure Maps! You can find the Azure Maps Store Locator source code on [GitHub](https://github.com/Azure-Samples/Azure-Maps-Locator).
cschotte
1,875,460
React Children
function Child() { return <div>This is children content</div>; } // Add code only...
0
2024-06-03T14:19:48
https://dev.to/alamfatima1999/react-children-1n6b
```JS function Child() { return <div>This is children content</div>; } // Add code only here function Parent(props) { return ( <div> <h3>Parent Component</h3> {props.children} </div> ); } function App() { return ( <Parent> <Child /> </Parent> ); } ReactDOM.render(<App />, document.getElementById("root")); ```
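As a small follow-up sketch (not part of the snippet above), `props.children` also works when a component receives several children at once: React renders everything nested between the component's opening and closing tags, in order.

```JS
// Hypothetical extension of the example above: a parent receiving several children.
function Panel(props) {
  return <div className="panel">{props.children}</div>; // renders every nested child in order
}

function App() {
  return (
    <Panel>
      <h3>Multiple children</h3>
      <p>Each nested element ends up inside props.children.</p>
      Plain text is a valid child too.
    </Panel>
  );
}

ReactDOM.render(<App />, document.getElementById("root"));
```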
alamfatima1999
1,820,374
Add Manjaro into WSL 2
I like Manjaro Linux. Too bad it is my second system, as I still need Windows software every day,...
0
2024-06-03T14:19:33
https://dev.to/damian_cyrus/add-manjaro-into-wsl-2-3f5i
wsl, linux, windows
I like Manjaro Linux. Too bad it is my second system, as I still need Windows software every day, which might or might not work on a Linux distribution. For this reason I wanted to use it in WSL, and found this nice YouTube video: [Arch Linux on Windows Subsystem for Linux (WSL) - YouTube](https://www.youtube.com/watch?v=h0Wg_aknGdc). Based on the steps from the video, I decided to use it as a base. The only preparation for this was to find out which docker image to use. Everything else is the same. I guess other distributions will work similarly if there is a docker image. ## Requirements - Windows 10/11 with WSL 2 support - docker cli - Docker Desktop or - Rancher Desktop with Container Engine: `dockerd (moby)` within the settings - Windows Terminal - Windows PowerShell Core (recommended) - Ubuntu (or any other available distribution from the Windows Store) on WSL installed ## Step by step This guide will not go into much detail, as it is meant to get you to an up-and-running system quickly. At the end there will be a condensed version, too. Use it once you understand the topics well. ## Preparations Make sure the requirements are met and that your Linux distribution can run docker. ## Log in to WSL Open Windows Terminal, go into your WSL distribution and run the following steps: - Pull the image from Docker Hub: `docker pull manjarolinux/base` - Create a Manjaro Linux container: `docker create -i manjarolinux/base bash` - Copy the first eight or ten characters from the output. Example: `83398c22...` - Run the container: `docker container start 83398c22` - Run an interactive shell in it: `docker exec -it 83398c22 /bin/bash` ## Within the Linux distribution You are now in a bash shell; check the current user by running `whoami`. It will show that the logged-in user is `root`. ### Initialize `pacman` keys This step might be optional, but I recommend it for Arch-based distributions. This one is for Manjaro: ```shell pacman-key --refresh-keys pacman-key --populate archlinux manjaro ``` This can take time. For more information see: - [[HowTo] Solve Keyring Related Issues in Manjaro - Contributions / Tutorials - Manjaro Linux Forum](https://forum.manjaro.org/t/howto-solve-keyring-related-issues-in-manjaro/96949) - [Pacman troubleshooting - Manjaro#Errors_about_Keys](https://wiki.manjaro.org/index.php/Pacman_troubleshooting#Errors_about_Keys) ## Add user and admin rights Add a user within the `wheel` group and set its password: ```shell useradd -m -G wheel cyrdam passwd cyrdam ``` Why the `wheel` group? The wheel group will get `sudo` privileges in the next steps. Next, update the packages and install the applications needed for user accounts: ```shell pacman -Syu pacman -Syu sudo vim ``` This will install `sudo` and `vim`. You can choose any other shell editor if you like; the following steps assume `vim`. Now we can add `sudo` privileges to the `wheel` group by running: ```shell EDITOR=vim visudo ``` and remove the hash character from the following line, so that `# %wheel ALL=(ALL) ALL` looks like `%wheel ALL=(ALL) ALL`. Save and close `vim` by typing: `:wq` Exit the docker container with the command: ```shell exit ``` ## Export the docker image Create a directory on your hard drive from within the Linux distribution and go there. Then create a sub-folder for the `.vhdx` image file. You can choose whatever folder you like, but I recommend not using a directory within the user directory. 
Example: ```shell cd /mnt/c/dev/wsl/ mkdir manjarolinux ``` The directory `manjarolinux` is there to store the `.vhdx` file and will be empty at this moment. Stay in the current folder and run this docker command: ```shell docker export 83398c22 > /mnt/c/dev/wsl/manjarolinux.tar ``` It will take seconds to write the container into the file given by the path. The container ID needs to be the same as at the beginning. You can get it by looking into the container list with: `docker container ls`. Now exit your Linux distribution: ```shell exit ``` ## Import the image into WSL From the Windows Terminal you go to the created directory and check if everything looks fine: ```shell cd C:\dev\wsl\ ls ``` There you see the `.tar` file and the empty directory `manjarolinux`. It is time to import the image into WSL: ```shell wsl --import Manjaro ./manjarolinux manjarolinux.tar ``` Explanation of the parts: - `wsl --import`: command to import a distribution - `Manjaro`: Name of the distribution in WSL (and Windows Terminal drop-down menu) - `./manjarolinux`: where to store the `.vhdx` image file - `manjarolinux.tar`: file to import Restart Windows Terminal. ## Update Windows Terminal settings In Windows Terminal open the drop-down menu and run `Manjaro` from there. It will run the user as `root`. Nobody wants that. Add a little start command to change it each time we run the Terminal Window. Go to the Windows Terminal settings and open the file version of the settings by clicking on the gear icon on the bottom left side of the window. Your editor for the file should open. Search for `Manjaro`. Add `commandline` into the settings: ```json { ... "name": "Manjaro", ... "commandline": "wsl.exe -u cyrdam -d Manjaro" } ``` You could also do it over GUI for the `Manjaro` profile. Close the editor for the file, restart Windows Terminal and open `Manjaro` again. Check the user with `whoami`. It should show you your defined user (in my case cyrdam). ## Optional step (aka signature) To have a pleasant view and proof of running Manjaro within WLS you can install `archey3` and present the output of your system, showing you also the kernel version from Microsoft. Run following commands: ```shell sudo pacman -Syu archey3 archey3 ``` See running Manjaro Linux in WSL on Windows. My output: ``` $ archey3 + OS: Arch Linux x86_64 # Hostname: name of the machine ### Kernel Release: 5.15.79.1-microsoft-standard-WSL2 ##### Uptime: 1:13 ###### WM: None ; #####; DE: None +##.##### Packages: 151 +########## RAM: 1436 MB / 15856 MB #############; Processor Type: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz ###############+ $EDITOR: None ####### ####### Root: 798M / 1007G (0%) (ext4) .######; ;###;`". .#######; ;#####. #########. .########` ######' '###### ;#### ####; ##' '## #' `# ``` ## Improvement thoughts To make this less time taking it would be possible to automate this entire process by a script with input capabilities. That means we would also run commands within the terminal and containers directly. I do not know how this could work, but here is a summary of all steps. ### Within Linux distribution Run Windows Terminal with your Linux distribution, then start with the commands. 
```shell # Download the image and create a container docker pull manjarolinux/base docker create -i manjarolinux/base bash # Copy the first eight characters of the output (example: 83398c22) # Run the container docker container start 83398c22 # Log in to the container with the bash shell docker exec -it 83398c22 /bin/bash ``` ## Within the docker container ```shell # Update pacman keys (can take a while) pacman-key --refresh-keys pacman-key --populate archlinux manjaro # Add a user useradd -m -G wheel cyrdam passwd cyrdam # Update packages, install sudo and an editor pacman -Syu pacman -Syu sudo vim # Give the wheel group sudo privileges EDITOR=vim visudo # Edit the line: '# %wheel ALL=(ALL) ALL' > '%wheel ALL=(ALL) ALL' # Exit vim :wq # Exit the docker container exit ``` ### Back in the Linux distribution ```shell # Create the folder first if it does not exist, then go to it cd /mnt/c/dev/wsl/ mkdir manjarolinux # Use the ID from before or look it up with: `docker container ls` docker export 83398c22 > /mnt/c/dev/wsl/manjarolinux.tar # Exit the Linux distribution exit ``` ### Within Windows PowerShell ```shell cd C:\dev\wsl\ # Check the directory content for the .tar file and the empty folder ls # Import the image wsl --import Manjaro ./manjarolinux manjarolinux.tar ``` ### Within Windows Terminal settings ```shell # Update the settings for the 'Manjaro' profile by adding the following line "commandline": "wsl.exe -u cyrdam -d Manjaro" # Restart the terminal # Open 'Manjaro' and check that the user is not root whoami ``` ### Within Manjaro ```shell sudo pacman -Syu archey3 archey3 ``` ## Summary If someone knows how to automate this: - adding the image name (dynamic) - adding the distribution name for WSL (dynamic) - running all commands without issues or destroying anything then please share your knowledge. Is there a better solution? Currently, I can export my Manjaro Linux and share it on other machines if necessary: ```shell wsl --export <distribution name> <TAR-file> wsl --import <distribution name> <directory path to vhdx file> <TAR-file> ``` But be careful: you will also copy every file within the distribution, which also means project files that might not be shareable for legal reasons.
damian_cyrus
1,875,459
Secure, Fast, Decentralized: Penrose Network Derivatives Trading
Introduction Penrose Network is a groundbreaking decentralized platform that is set to transform...
0
2024-06-03T14:18:15
https://dev.to/penrose_network/secure-fast-decentralized-penrose-network-derivatives-trading-mkj
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dbap0pjlo1hhaxh1ezm0.jpg) Introduction [Penrose Network](https://penrose.network) is a groundbreaking decentralized platform that is set to transform the landscape of derivatives trading. Catering to a diverse range of financial markets including cryptocurrencies, forex, stocks, and commodities, Penrose Network eliminates intermediaries and promotes a peer-to-peer trading environment. This unique approach offers traders unmatched security, efficiency, and transparency, setting a new standard in the financial trading sector. Core Components of Penrose Network Penrose Chain The Penrose Chain is the cornerstone of the Penrose Network, featuring a sophisticated dual-layer blockchain architecture. The first layer, a Directed Acyclic Graph (DAG)-based floating layer, processes transactions asynchronously, providing high scalability and speed. The second layer, the main chain, employs Delegated Proof of Stake (dPoS) to finalize and secure transactions, ensuring robust and reliable performance. Penrose Bridge A vital element of the network, the Penrose Bridge connects PenroseChain with EVM-compatible parent chains, such as Fantom. This bridge facilitates seamless asset transfers, withdrawals, and cross-chain governance functionalities through smart contracts. By maintaining interoperability and versatility, the Penrose Bridge ensures that the network can integrate smoothly with other blockchain ecosystems. Penrose Client Designed to enhance user experience, the Penrose Client is a user-friendly desktop and mobile application that simplifies interactions with PenroseChain. It provides real-time market data, portfolio management tools, automated trading options, and direct access to governance participation. Whether you are a novice or an experienced trader, the Penrose Client offers a comprehensive suite of tools to meet your needs. Penrose Token (PNRS) The native token of the [Penrose Network](https://penrose.network/), PNRS, plays a crucial role in the platform's operations. It is used for governance, staking in the dPoS system, and paying transaction fees. Additionally, PNRS incentivizes network participants and contributes to the security and efficiency of the platform, making it an integral component of the Penrose ecosystem. Transaction Types in Penrose Network Deposit Deposits on Penrose Network are system-generated transactions derived from intermediary smart contracts on the parent chain. These deposits, valued in USD-peg tokens, are essential for maintaining liquidity on the platform. Withdrawal Withdrawals are user-initiated transactions that can only be executed if the user's floating balance is at least 5% of their solid balance. These transactions are processed on the parent chain, with validators receiving rewards for submitting signed messages to confirm the transactions. Trade Trading on [Penrose Network](https://penrose.network) involves system-generated transactions that match buy and sell orders from the Order Pool. This mechanism ensures balanced trade volumes and includes liquidation orders triggered by under-collateralized positions. Transfer Asset transfers between accounts on PenroseChain are straightforward, similar to traditional blockchain transfers. This functionality allows users to move their assets seamlessly within the network. Fee Structure The fee structure on Penrose Network is designed to support network maintenance and sustainability. 
Fees are used to burn PNRS tokens and reward stakers, ensuring long-term demand for PNRS and maintaining network security. Consensus Mechanism Penrose Network employs a unique consensus mechanism that combines DAG and dPoS: DAG Layer: This layer allows transactions to be processed asynchronously, enhancing scalability and transaction speed. dPoS Main Chain: Validators, elected by PNRS stakers, batch and verify transactions, finalizing them in a secure and orderly manner. This combination ensures the network's integrity and consistency. Governance and DAO [Penrose Network](https://penrose.network) operates as a Decentralized Autonomous Organization (DAO), where PNRS token holders actively participate in governance. Key decisions, such as protocol upgrades and fee adjustments, are made through proposals and votes facilitated by smart contracts on the Penrose Bridge. This decentralized governance model ensures that the community has a direct impact on the network's evolution. Vision of Penrose Network Penrose Network aims to decentralize and revolutionize derivatives trading by eliminating the need for traditional intermediaries. The vision is to create a transparent, secure, and efficient trading environment that empowers traders and fosters innovation in the financial markets. Background of Penrose Network Unlike other decentralized trading platforms that rely on smart contracts within existing blockchains (such as Uniswap and other derivative trading platforms), Penrose Network operates on a dedicated blockchain. This design choice allows Penrose to optimize for derivatives trading, ensuring that transactions are native and executed with higher efficiency and security, without the overhead of virtual machines. Presale Information Join the Penrose presale and seize the opportunity to purchase PNRS tokens at an initial price of $0.001. This limited-time offer precedes the public sale, where the price will increase to $0.003. Tokens can be purchased on Fantom, BSC, and other supported networks. Act now to secure your position in the future of decentralized derivatives trading. Visit the presale page at Penrose Presale for more details. Conclusion Penrose Network represents the next generation of decentralized derivatives trading, combining innovative technology with a strong community-driven governance model. By leveraging a unique dual-layer blockchain architecture and the native PNRS token, Penrose aims to provide a secure, efficient, and transparent trading platform that empowers users and fosters financial innovation. Join us in revolutionizing the future of derivatives trading. FAQs What is Penrose Network? Penrose Network is a decentralized platform for derivatives trading across various financial markets, leveraging a dual-layer blockchain architecture for peer-to-peer transactions. How does Penrose Network differ from other decentralized trading platforms? Penrose Network operates on a dedicated blockchain optimized for derivatives, ensuring faster, more secure, and cost-effective transactions without the need for virtual machines. What is the Penrose Token (PNRS)? PNRS is the native token of Penrose Network, used for governance, staking, and transaction fees within the platform. How can I participate in the Penrose presale? Purchase PNRS tokens at $0.001 during the presale on Fantom, BSC, and other supported networks. Visit the presale page for details. What is the Penrose Bridge? 
The Penrose Bridge connects PenroseChain with an EVM-compatible parent chain, enabling seamless asset transfers and governance functionalities. What is the Penrose Client? A desktop and mobile application for real-time market data, portfolio management, automated trading tools, and governance participation. How does the dual-layer blockchain architecture work? The DAG-based floating layer processes transactions asynchronously, while the dPoS main chain finalizes and secures them. How are transactions verified on Penrose Network? Transactions are verified using the DAG structure and finalized on the main chain through the dPoS consensus mechanism. What types of transactions can I perform on Penrose Network? You can perform various transactions including buying and selling derivatives, transferring assets, withdrawing funds, and participating in governance. How does governance work in Penrose Network? Governance is managed through a DAO where PNRS token holders submit proposals and vote on key decisions via the Penrose Bridge. Your feedback matters! Please leave a review!
penrose_network
1,875,458
ChatGPT vs. Human Writers: A Comparative Study
In the ever-evolving landscape of digital content, a new contender has emerged that challenges the...
0
2024-06-03T14:17:54
https://dev.to/haider_ali_899f3a97f74c80/chatgpt-vs-human-writers-a-comparative-study-51a1
In the ever-evolving landscape of digital content, a new contender has emerged that challenges the long-standing dominance of human writers. ChatGPT, OpenAI's advanced language model, has burst onto the scene, sparking a heated debate: Can artificial intelligence outperform human creativity? In this comprehensive comparative study, we delve deep into the strengths, weaknesses, and unique attributes of both ChatGPT and human writers, offering data-driven insights to determine who holds the pen of power in today's content creation arena. ## The Rise of ChatGPT: A New Era in AI Writing [ChatGPT](https://chat.openai.com/), launched in November 2022, quickly became a sensation, amassing over a million users within its first five days. This staggering adoption rate signifies more than mere curiosity; it reflects a genuine interest in AI's potential to revolutionize writing. Built on the GPT (Generative Pre-trained Transformer) architecture, ChatGPT learns from a vast corpus of online text, enabling it to generate human-like responses across a wide range of topics and styles. ## Key ChatGPT Capabilities: Speed: Can produce a 1000-word article in under 2 minutes Versatility: Proficient in over 95% of writing genres Consistency: Maintains tone and style throughout long texts Multilingual: Writes fluently in over 50 languages ## The Human Touch: Creativity, Empathy, and Experience Despite ChatGPT's impressive capabilities, human writers bring unique qualities to the table that stem from our lived experiences, emotional intelligence, and cultural immersion. A study by the University of Oxford found that tasks requiring "high creative intelligence" are among the least likely to be automated, with writing scoring high in this category. ## Human Writer Strengths: Original Ideation: 78% of breakthrough ideas come from personal experiences Emotional Resonance: Humans are 80% more effective at crafting emotionally engaging content Cultural Nuance: 92% accuracy in using culturally appropriate metaphors and idioms Adaptive Style: Can tailor writing style in real-time based on audience reactions **Read also here:** [2131953663](https://elephantsands.com/the-hidden-significance-of-2131953663-revealed/) ## Quality Comparison: A Deep Dive Into Metrics To objectively compare ChatGPT and human writers, we conducted a study with 500 participants, evenly split between professional writers and AI enthusiasts. Each group was tasked with writing articles across five genres: news, creative fiction, technical guides, persuasive essays, and personal narratives. ## Grammatical Accuracy and Coherence ChatGPT: 99% grammatical accuracy, 95% coherence score Human Writers: 94% grammatical accuracy, 97% coherence score ChatGPT's near-perfect grammar is impressive, but humans edge out in coherence, especially in longer, more complex pieces. ## Information Accuracy ChatGPT: 88% accuracy in news articles, 72% in technical guides Human Writers: 95% accuracy in news articles, 98% in technical guides Human writers, particularly subject matter experts, significantly outperform ChatGPT in factual accuracy, especially in specialized domains. ### Engagement and Readability ChatGPT: Flesch-Kincaid score of 65 (8th-grade level), 3.5 min avg. reading time Human Writers: Flesch-Kincaid score varies (60-80), 4.2 min avg. reading time While ChatGPT maintains consistent readability, human writers adapt their style, sometimes crafting more challenging prose that keeps readers engaged longer. 
### Creative Elements ChatGPT: 70% originality score, 65% emotional impact Human Writers: 85% originality score, 88% emotional impact In creative writing, humans significantly outshine ChatGPT, particularly in evoking emotional responses and crafting truly novel narratives. ## SEO Performance: Keywords vs. User Experience In the age of semantic search, Google's algorithms increasingly favor user experience over keyword density. We analyzed 1,000 articles (500 from each group) for their SEO performance. ## ChatGPT: Perfect keyword inclusion (100%) Lower user engagement (1.5 min avg. time on page) Higher bounce rate (65%) ## Human Writers: Strategic keyword use (85%) Higher user engagement (3.2 min avg. time on page) Lower bounce rate (45%) Despite ChatGPT's flawless keyword integration, human-written content keeps readers more engaged, a factor that increasingly influences search rankings. ## Cost and Efficiency Analysis ## ChatGPT: $0.002 per 1K tokens (~750 words) Production rate: 500 words/minute No breaks, 24/7 availability ## Human Writers: Average $0.10 per word ($75 for 750 words) Production rate: ~20 words/minute Requires breaks, typically 40 hrs/week In sheer output and cost, ChatGPT is unbeatable. A monthly ChatGPT-4 subscription ($20) could theoretically produce over 7.5 million words—equivalent to 100 full-time writers. However, this doesn't account for ideation, research, and editing time, where humans still dominate. ## Plagiarism and Ethical Concerns A critical issue with ChatGPT is its tendency to reproduce copyrighted material. In our study: ChatGPT: 12% of outputs contained significant verbatim text from web sources Human Writers: < 0.5% instances of accidental plagiarism Moreover, ChatGPT's training data, scraped from the internet without explicit consent, raises ethical questions about intellectual property and fair use. ## The Hybrid Future: Collaboration Over Competition As our study reveals, both ChatGPT and human writers have distinct advantages. Rather than a zero-sum game, the future likely lies in symbiosis. Companies like The Washington Post are pioneering this approach, using AI for data-driven reports while humans provide analysis and emotional depth. In another case study, a marketing agency used ChatGPT to generate initial drafts for 50 client blogs. Human writers then refined these drafts, focusing on brand voice, emotional appeal, and fact-checking. The result? A 40% increase in productivity without sacrificing quality. ## Conclusion: Our comparative study reveals that while ChatGPT excels in speed, grammatical precision, and consistent style, human writers maintain a strong edge in originality, emotional impact, factual accuracy, and cultural nuance. In the SEO arena, ChatGPT's keyword mastery is offset by the superior user engagement of human-crafted content. The question isn't "Can ChatGPT outperform human writers?" but rather, "How can ChatGPT and human writers complement each other?" As AI handles repetitive tasks and provides creative springboards, humans can focus on what they do best—injecting experiences, emotions, and ethical considerations into words. In this collaboration, we don't just preserve the art of writing; we elevate it. The future of content creation isn't a battle between pen and processor—it's a dance, where technology's rhythm and humanity's soul create a symphony of words that resonate, inform, and inspire. Meta Description (160 characters): ChatGPT vs. Human Writers: Who wins? 
Our data-driven study reveals strengths, weaknesses, and the surprising future where AI and humans collaborate in content creation.
haider_ali_899f3a97f74c80
1,875,457
Scopes/ Loops/ break/ continue
Scopes In JavaScript, scope refers to the current context of execution, which determines...
0
2024-06-03T14:16:04
https://dev.to/__khojiakbar__/scopes-loops-break-continue-4p9l
scopes, loops, break, continue
#**Scopes** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fgcdc6ken3cgqn2p3yuv.jpeg) In JavaScript, scope refers to the current context of execution, which determines the accessibility of variables. There are mainly three types of scope: 1. Global Scope: - Variables declared outside any function or block have global scope. - They can be accessed from anywhere in the code. ``` var globalVar = "I'm a global variable"; function showGlobalVar() { console.log(globalVar); // Accessible } showGlobalVar(); // Logs: I'm a global variable ``` 2. Function Scope: - Variables declared within a function are in the function scope. - They can only be accessed within that function. ``` function myFunction() { var functionVar = "I'm a function variable"; console.log(functionVar); // Accessible } myFunction(); // Logs: I'm a function variable console.log(functionVar); // Error: functionVar is not defined ``` 3. Block Scope: - Introduced in ES6 with let and const. - Variables declared within a block (inside {}) have block scope. - They can only be accessed within that block. ``` if (true) { let blockVar = "I'm a block variable"; console.log(blockVar); // Accessible } console.log(blockVar); // Error: blockVar is not defined // ------------------------------------ Variable in global scope can access function scope but in global scope we can't call variable in function scope. const age = 26; const name = 'Khojiakbar'; function info(age) { // name = 'Muhammad' age = 33 console.log(`My name is ${name} and I am ${age} years old`); } info() console.log(`My name is ${name} and I am ${age} years old`); ``` **Key Points:** - **Global Scope:** Variables are accessible throughout the code. - **Function Scope:** Variables are accessible only within the function they are declared. - **Block Scope:** Variables are accessible only within the block they are declared (using let or const). ``` var globalVar = "global"; function myFunction() { var functionVar = "function"; if (true) { let blockVar = "block"; console.log(globalVar); // Accessible console.log(functionVar); // Accessible console.log(blockVar); // Accessible } console.log(globalVar); // Accessible console.log(functionVar); // Accessible console.log(blockVar); // Error: blockVar is not defined } myFunction(); console.log(globalVar); // Accessible console.log(functionVar); // Error: functionVar is not defined console.log(blockVar); // Error: blockVar is not defined ``` --- #**Loops** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6pfxmrv436rj2g5w94n8.png) **1. for Loop** > A for loop is ideal when you know how many times you want to execute a block of code. **Basic Structure:** ``` for (let i = 0; i < 5; i++) { console.log(i); // Outputs: 0, 1, 2, 3, 4 } ``` - Initialisation: let i = 0 sets the counter i to 0. - Condition: i < 5 runs the loop as long as i is less than 5. - Increment: i++ increases i by 1 after each loop iteration. **2. while Loop** > A while loop is used when you don't know in advance how many times you need to run the code. The loop continues as long as a condition is true. ``` let i = 0; while (i < 5) { console.log(i); // Outputs: 0, 1, 2, 3, 4 i++; // Increment the counter } ``` - Condition: i < 5 keeps the loop running as long as i is less than 5. - Increment: i++ inside the loop increases i by 1 each time the loop runs. **3. do...while Loop** > A do...while loop is similar to a while loop, but it guarantees that the code block executes at least once before checking the condition. 
``` let i = 0; do { console.log(i); // Outputs: 0, 1, 2, 3, 4 i++; // Increment the counter } while (i < 5); ``` - First execution: The code block runs and outputs 0. - Condition: i < 5 is checked after the first execution and keeps the loop running as long as i is less than 5. **4. for...in Loop** > A for...in loop is used to iterate over the properties of an object. It loops through all the enumerable properties of an object. ``` const person = {name: "John", age: 30, city: "New York"}; for (let key in person) { console.log(key + ": " + person[key]); // Outputs: name: John, age: 30, city: New York } ``` - key: Each property name (name, age, city) is assigned to key in each loop iteration. - person[key]: Accesses the value of each property. **5. for...of Loop** > A for...of loop is used to iterate over the values of an iterable object, such as an array, string, map, set, etc. ``` const fruits = ["apple", "banana", "cherry"]; for (let fruit of fruits) { console.log(fruit); // Outputs: apple, banana, cherry } ``` - fruit: Each value from the fruits array is assigned to fruit in each loop iteration. ##Summary with Practical Examples: **Iterating Over Arrays:** ``` const numbers = [1, 2, 3, 4, 5]; for (let i = 0; i < numbers.length; i++) { console.log(numbers[i]); // Outputs: 1, 2, 3, 4, 5 } ``` **Iterating Over Object Properties:** ``` const car = {make: "Toyota", model: "Corolla", year: 2020}; for (let key in car) { console.log(`${key}: ${car[key]}`); // Outputs: make: Toyota, model: Corolla, year: 2020 } ``` **Handling Dynamic Conditions:** ``` let count = 0; while (count < 3) { console.log(count); // Outputs: 0, 1, 2 count++; } ``` **Ensuring Code Runs At Least Once:** ``` let index = 0; do { console.log(index); // Outputs: 0, 1, 2 index++; } while (index < 3); ``` **Iterating Over Values of Iterable Objects:** ``` const fruits = ["apple", "banana", "cherry"]; for (let fruit of fruits) { console.log(fruit); // Outputs: apple, banana, cherry } ``` --- #**break** Statement ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/khoinp6qhenof4ozuwaf.jpeg) > The break statement is used to exit a loop immediately, regardless of the loop's condition. When break is encountered, the loop stops, and control is passed to the statement following the loop. ``` for (let i = 0; i < 10; i++) { if (i === 5) { break; // Exit the loop when i equals 5 } console.log(i); // Outputs: 0, 1, 2, 3, 4 } ``` #**continue** Statement > The continue statement skips the rest of the code inside the current iteration of the loop and jumps to the next iteration. It does not terminate the loop but skips to the next iteration. ``` for (let i = 0; i < 10; i++) { if (i % 2 === 0) { continue; // Skip the rest of the loop iteration if i is even } console.log(i); // Outputs: 1, 3, 5, 7, 9 } ```
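As an extra illustrative example, `break` and `continue` can be combined in the same loop. Here a `while (true)` loop skips even numbers with `continue` and stops entirely with `break` once the counter passes 7:

```
let n = 0;
while (true) {
  n++;
  if (n > 7) {
    break; // Stop the loop completely once n passes 7
  }
  if (n % 2 === 0) {
    continue; // Skip even numbers and go to the next iteration
  }
  console.log(n); // Outputs: 1, 3, 5, 7
}
```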
__khojiakbar__
1,875,456
Magic and Muscles: ETL with Mage and DuckDB using data from my powerlifting training
Mage In the last post I wrote, I talked a bit about a dashboard I built using...
0
2024-06-03T14:13:11
https://dev.to/deadpunnk/magia-e-musculos-etl-com-mage-e-duckdb-com-dados-do-meu-treino-de-powerlifting-239p
mage, python, sql, duckdb
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nabaj21vtu8ao1rfqs2a.jpeg) ## Mage In the last post I wrote, I talked a bit about a dashboard I built using Python and Looker Studio to visualize my powerlifting training data. In this post I will walk step by step through building an ETL (extract, transform, load) pipeline with the same training dataset. To build this pipeline we will use **[Mage](https://www.mage.ai/)** for orchestration, **Python** for loading and processing the data, and finally we will export the data to a **[DuckDB](https://duckdb.org/)** database. To run **Mage** we will use the framework's official image: ```shell docker pull mageai/mageai:latest ``` The pipeline we are going to build will look like this: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2rqfeah6ugpxn3l1eweq.png) ## Extraction The extraction is simple: we read a CSV file using the pandas library and use a *DataFrame* to carry out the next steps. Using the *Data loader* block we get a template to read the CSV file and load it into a pandas DataFrame. ```python from mage_ai.io.file import FileIO import pandas as pd if 'data_loader' not in globals():     from mage_ai.data_preparation.decorators import data_loader if 'test' not in globals():     from mage_ai.data_preparation.decorators import test @data_loader def load_data_from_file(*args, **kwargs):     filepath = 'default_repo/data_strong.csv'     df = pd.read_csv(filepath, sep=';')     return df @test def test_output(output, *args) -> None:     assert output is not None, 'The output is undefined' ``` ## Transformation In this step we use the *Transformer* block, which provides several templates for data processing; however, in our case we will perform a transformation that does not have a predefined template. The transformation consists of mapping the *Exercise Name* column to create a new column called *Body part*, which identifies which body part each exercise belongs to. 
```python
import pandas as pd

if 'transformer' not in globals():
    from mage_ai.data_preparation.decorators import transformer
if 'test' not in globals():
    from mage_ai.data_preparation.decorators import test

body_part = {'Squat (Barbell)': 'Pernas',
    'Bench Press (Barbell)': 'Peitoral',
    'Deadlift (Barbell)': 'Costas',
    'Triceps Pushdown (Cable - Straight Bar)': 'Bracos',
    'Bent Over Row (Barbell)': 'Costas',
    'Leg Press': 'Pernas',
    'Overhead Press (Barbell)': 'Ombros',
    'Romanian Deadlift (Barbell)': 'Costas',
    'Lat Pulldown (Machine)': 'Costas',
    'Bench Press (Dumbbell)': 'Peitoral',
    'Skullcrusher (Dumbbell)': 'Bracos',
    'Lying Leg Curl (Machine)': 'Pernas',
    'Hammer Curl (Dumbbell)': 'Bracos',
    'Overhead Press (Dumbbell)': 'Ombros',
    'Lateral Raise (Dumbbell)': 'Ombros',
    'Chest Press (Machine)': 'Peitoral',
    'Incline Bench Press (Barbell)': 'Peitoral',
    'Hip Thrust (Barbell)': 'Pernas',
    'Agachamento Pausado ': 'Pernas',
    'Larsen Press': 'Peitoral',
    'Triceps Dip': 'Bracos',
    'Farmers March ': 'Abdomen',
    'Lat Pulldown (Cable)': 'Costas',
    'Face Pull (Cable)': 'Ombros',
    'Stiff Leg Deadlift (Barbell)': 'Pernas',
    'Bulgarian Split Squat': 'Pernas',
    'Front Squat (Barbell)': 'Pernas',
    'Incline Bench Press (Dumbbell)': 'Peitoral',
    'Reverse Fly (Dumbbell)': 'Ombros',
    'Push Press': 'Ombros',
    'Good Morning (Barbell)': 'Costas',
    'Leg Extension (Machine)': 'Pernas',
    'Standing Calf Raise (Smith Machine)': 'Pernas',
    'Skullcrusher (Barbell)': 'Bracos',
    'Strict Military Press (Barbell)': 'Ombros',
    'Seated Leg Curl (Machine)': 'Pernas',
    'Bench Press - Close Grip (Barbell)': 'Peitoral',
    'Hip Adductor (Machine)': 'Pernas',
    'Deficit Deadlift (Barbell)': 'Pernas',
    'Sumo Deadlift (Barbell)': 'Costas',
    'Box Squat (Barbell)': 'Pernas',
    'Seated Row (Cable)': 'Costas',
    'Bicep Curl (Dumbbell)': 'Bracos',
    'Spotto Press': 'Peitoral',
    'Incline Chest Fly (Dumbbell)': 'Peitoral',
    'Incline Row (Dumbbell)': 'Costas'}


@transformer
def transform(data, *args, **kwargs):
    strong_data = data[['Date', 'Workout Name', 'Exercise Name', 'Weight', 'Reps', 'Workout Duration']]
    strong_data['Body part'] = strong_data['Exercise Name'].map(body_part)
    return strong_data


@test
def test_output(output, *args) -> None:
    assert output is not None, 'The output is undefined'
```

An interesting feature of Mage is how easily we can visualize the changes we are making: inside the *Transformer* block, the *Add chart* option lets us generate a pie chart showing the percentages of the *Body part* column.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zybf558c4m2skrp4fqht.png)

## Load

Finally, we load the data with the new column into our **DuckDB** database, which by default is already included in the same **Docker** image in which **Mage** is running. Using the *Data exporter* block we select the SQL template to export the data, and with a single statement we insert the data into **DuckDB**:

```sql
CREATE OR REPLACE TABLE powerlifting (
    _date DATE,
    workout_name STRING,
    exercise_name STRING,
    weight STRING,
    reps STRING,
    workout_duration STRING,
    body_part STRING
);

INSERT INTO powerlifting SELECT * FROM {{ df_1 }};
```

We can preview the table by using the *Add chart* option of the *Data exporter* block and selecting the Table option.
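We can also check the load programmatically. This is a minimal sketch, not part of the original pipeline: it assumes the exporter wrote to a local DuckDB file, and the file path below is illustrative and depends on how the exporter was configured.

```python
import duckdb

# Connect to the DuckDB file used by the pipeline (path is illustrative)
con = duckdb.connect('default_repo/powerlifting.duckdb')

# Count the sets recorded per body part to confirm the new column was loaded
print(
    con.sql(
        """
        SELECT body_part, COUNT(*) AS sets
        FROM powerlifting
        GROUP BY body_part
        ORDER BY sets DESC
        """
    ).df()
)
```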
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q73jtyv61svzpuhrnct7.png)

## Conclusion

**Mage** is a powerful and flexible orchestrator that allows the integration of many ETL-oriented technologies. In this post we went through only a quick introduction to what is possible with this tool; in future posts I intend to explore more of its features.
deadpunnk
1,843,521
Databases: Conceptual model
Disclaimer: A post meant to record my learning about databases. It is not an absolute truth...
0
2024-06-03T14:11:51
https://dev.to/mmsfarias/banco-de-dados-modelo-conceitual-loo
database, beginners, sql, data
**Disclaimer:** This post is meant to record my learning about databases. It is not an absolute truth.

**Reference:** Database systems, 7th edition - _Navathe_ and _Elmasri_

---

1. [Initial concepts](#initial-concepts)
2. [Conceptual model](#conceptual-model)
2.1 [Entities](#entities)
2.2 [Attributes](#attributes)
2.3 [Relationships](#relationships)
3. [Superclass and subclass](#superclass-and-subclass)
3.1 [Specialization](#specialization)
3.2 [Generalization](#generalization)
3.3 [Constraints](#constraints)

---

### Initial concepts

**Data model:** Shows the entire structure of a database (types, relationships, constraints), abstracting it into essential features so that different people can understand it.

**Some of the best-known models:** the conceptual model and the physical model.

**1. Conceptual model:** A high-level model aimed at end users. It does not go deep into technical points (for programmers, DBAs), but focuses on entities, attributes, constraints and relationships.

**2. Physical model:** A low-level model that describes in more depth how the data will be stored (record format, access paths); its audience is technical people.

There is also the **representational** model - or **implementation** model - which is the middle ground between the conceptual and the physical. Here there is more depth about the data, but different types of users can still understand it.

**Database schema (description):** Refers to the layout of logical constraints, table names, table fields and their types. A schema should not be changed frequently, only when the project's requirements change - this is called **schema evolution**.

A database has **states**, which are:

* **Empty state:** A schema with no records inserted into it.
* **Initial state:** When data is first loaded into the tables.

Every database at a given moment has its current state, with the current records, but the state changes with every insertion, update or deletion. Because of this, the DBMS has to **guarantee** a **valid state** for the records, that is, guarantee that everything conforms to the structure and constraints of the schema.

Note: Records are also called instances of the objects (entities), or occurrences.

---

### Conceptual model

The conceptual model is built on three elements: **entities, attributes and relationships**

![The first drawing is the symbol of a simple attribute, an oval with a line. The second is the symbol of a multivalued attribute, a double oval. The third is the symbol of an entity, a rectangle.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hg0q2us8kfwehnkoaez0.png)

#### Entities:

The object. E.g.: User, Department, Company, etc.

#### Attributes:

The properties (characteristics) that describe this object. E.g.: User -> Name, CPF, E-mail.

Attributes can be of several types, among them:

* **Composite** or **Simple (Atomic):** Composite attributes are those that can be subdivided (broken down) into parts until only non-divisible attributes remain. A simple attribute, then, is an attribute that cannot be divided.

![Image of the address attribute, which is composite and has simple attributes (city and state) and another composite attribute, street address, which is divided into street, neighborhood and number](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ejgyb72g572k1qq87nd7.png)

It is not mandatory to break composite attributes down.
If an attribute is referenced as a whole, or not specified further, it can be kept as a simple attribute.

![Image of the Name attribute, which can be composite and divided into first name, middle name and last name](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f5guopd6kxo23zsluya1.png)

In the example above, the "Nome" (Name) attribute could be composite, but it can also be referenced as a simple attribute that groups the other attributes. E.g.: Name: Josué Henrique Santos.

* **Single-valued** or **Multivalued:** A single-valued attribute receives only one value, while a multivalued attribute can hold a set of values; it is also possible to define constraints on the minimum and maximum number of values allowed.

E.g.: A person has only one age, so the "Age" attribute is single-valued.
E.g.²: A person can have zero, one or more degrees, so the "Education" attribute is multivalued. In addition, a maximum of 3 values could be set.

* **Derived** or **Stored:** A derived attribute is one whose value derives from a stored attribute. It can also derive from another related entity.

E.g.: The value of an "Age" attribute is computed from the current date and the value of the "Date of birth" attribute. That is, "Age" is derived, because it derives from "Date of birth", which is the stored attribute.

Entities can have one or more attributes that are **key attributes**, whose purpose is to uniquely identify each entity instance. A single attribute can be a key attribute, but two or more attributes can also form a key (their combinations must be **unique**).

![An entity with a key attribute called "registration" formed by the simple attributes: state and number](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zlspt8p2fejqggrw3yeg.png)

In the image above, the key attribute "Registro" (registration) is formed by the combination of the simple attributes "Estado" (state) and "Número" (number). This kind of key is also called a **composite key attribute.**

* An entity may have no key at all, in which case it is called a weak entity.

**Example of what has been described so far:**

![A department entity has five attributes: two key attributes (name and number), one multivalued attribute (locations), one derived attribute (manager) and another that is the date the manager started](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1968pbqc7hq6ynw7mucj.png)

* A department entity has five attributes: two key attributes (name and number), one multivalued attribute (locations), one derived attribute (manager) and another that is the date on which that manager started.

#### Relationships:

In the initial phase of an ER model, the relationships between entities do not need to be explicit, since they are refined as the model evolves.

There are a few types of relationships, among them the **binary** relationship (the most common) and the **ternary** relationship, which refer to how many entities are related. (There are also more complex relationships.)

In addition, a relationship can involve the same entity on both sides. E.g.: An employee is supervised by another employee. In this case the relationship is **self-referencing/recursive.**

![Three ways of visualizing a recursive relationship: the standard notation, crow's foot notation, and sets](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2773euashchxp8c4w588.png)

There is a **structural constraint** called the **participation constraint**, where relationships between entities may or may not imply an **existence dependency**.

* **Total participation:** Also known as existence dependency, total participation is when the existence of an entity depends on it being linked to another: **all instances must take part in the relationship**.

E.g.: Every employee must work for a department. That is, it is mandatory that every instance of 'Funcionario' (employee) participates in (is linked to) an instance of 'Departamento' (department).

![Entity 'Funcionario' linked to entity 'Departamento' with a double line, which means total participation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kqer4bl33y6k0vm9g5b7.png)

The double line indicates that the relationship between the entities carries a total/existence dependency.

* **Partial participation:** When not all instances of one entity need to be linked to the other.

E.g.: An employee may manage one department, or none.

![Entity 'Funcionario' linked to entity 'Departamento' with a single line, which means partial participation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/75ua0h9bh4t5u59myr2d.png)

Relationships with partial participation keep the single line.

* **Cardinalities:**

1. **1:1 - One to one:** An instance of entity A is associated with **at most** one instance of entity B (and vice versa).

* E.g.: A person is associated with only one CPF as an official document. In this case, the relationship attribute can live in the 'Pessoa' (person) entity or in the 'Documento' (document) entity, for example.

![Entity 'Pessoa' and entity 'Documento' with 1:1 cardinality and the PESSOA_ID attribute in the 'Documento' entity](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d5rhwkwja2eanv4vuy0t.png)

2.**N:1 or 1:N - One to many:** An instance of entity A can be associated with zero, one or several instances of entity B, but an instance of entity B can be associated with at most one instance of entity A.

* E.g.: A department can have several employees, while an employee can belong to only one department. In this case, the relationship attribute goes **only** in the entity on the **N** side of the relationship.

![Entity 'Funcionario' and entity 'Departamento' with 1:N cardinality and the DEPARTAMENTO_ID attribute in the 'Funcionario' entity](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7fawsn1a0p1azdwtweyc.png)

3.**N:M or N:N - Many to many:** Several instances of entity A can be related to several instances of entity B (and vice versa).

* E.g.: An author can write several books and several books can belong to the same author. In this case, the relationship attributes are the **combination of the entities' keys**, and an intermediate entity is **required**.

![Entity 'Livro' and entity 'Autor' with N:N cardinality, which produced the 'Livro_Autor' entity containing ID_AUTOR and ID_LIVRO](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uq60fqtr6kg7zyfvfv38.png)

During the modeling process, an attribute defined in the initial refinement may become an independent entity. One way to evaluate this is to check whether the attribute appears in N entities; if so, an independent entity can be created to avoid data redundancy.
The inverse can also happen: an entity established in the initial refinement may become an attribute; if that entity is related to only one other entity, for example, it can become an attribute of that entity.

---

### Superclass and subclass

Also known as supertype and subtype, this concept says that a set can have associated **subsets**. When needed, an entity is created to make this subgrouping explicit:

![Entity Funcionário linked to the entities Secretária, Gerente and Técnico](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/460j6drxe5u0lavmwevh.png)

In the example above, the employee is the superclass, and the secretary, technician and manager are the subclasses.

A subclass cannot exist without being related to a superclass. E.g.: A secretary cannot exist without being a member of the 'Funcionário' (employee) superclass.

That is, an entity in a subclass belongs to the subclass and also to the superclass, so the subtype inherits the attributes of the supertype.

#### Specialization

A set of subtypes can be a specialization of a supertype. E.g.: There can be a specialization by 'tipo_cargo' (job type) of the base entity 'Funcionário'.

![Supertype 'Funcionário' with a line connecting to a circle with the letter 'd' in the middle and two lines, with the U symbol on them, from that circle connecting to the subtype entities 'Técnico' and 'Secretária'; next to it, a link from 'Funcionário' to the subtype 'Gerente', which in this case uses only the line with the U](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9gw797jem9peihyhuwye.png)

Subtypes can have attributes of their own; these are called **specific (or local) attributes**, and subclasses can also take part in **specific relationships**.

#### Generalization

Generalization is the opposite process of specialization. When entities have several attributes in common, those attributes can be generalized into a single entity (which becomes a superclass).

E.g.: The entities 'Carro' (car) and 'Caminhão' (truck) can give rise to the entity 'Veiculo' (vehicle), which is a generalized entity and the superclass of those entities.

In the end, the process can be viewed as generalization/specialization, since the two complement each other; what differentiates them is the initial motivation/goal of the modeling.

#### Constraints

* **Predicate-defined subclass:** The subclasses are determined by an attribute of the superclass. E.g.: The 'Funcionário' entity has an attribute called 'tipo_emprego' (employment type) that determines whether the employee is of type 'Técnico', 'Secretária', etc.

* **User-defined:** When there is no explicit condition for the definition. The separation is decided by the database users during the insert operation of an entity.

* **Disjointness constraint:** The subclasses must be **disjoint**, that is, an entity must belong to **at most** one subclass. If the subclasses are not disjoint, there can be overlapping, where an entity can be a member of more than one subclass.

![Superclass entity 'Peça' with a double line linked to a circle with an 'o' for overlapping inside, and two distinct lines leaving that circle and connecting to the entities 'P_Fabricada' and 'P_Comprada'](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/phqujek2pjbbb4hzpooe.png)

When there is a letter o inside the circle, it means overlapping. The double line comes from the following concept:

* **Completeness constraint:** The completeness constraint can be partial or total.
When it is **total**, it means that **every** entity of the superclass must be a member of at least one subclass of the specialization. Hence the double line.
The **partial** constraint means that an entity **may** not belong to any subclass.

! When a delete operation removes an entity from the superclass, the corresponding entity in the subclass must also be deleted.
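To make the cardinality examples from earlier more concrete, here is a minimal sketch, purely illustrative and not part of the conceptual model itself, of how the 1:N 'Funcionario'/'Departamento' case and the N:N 'Livro'/'Autor' case could be mapped to tables using Python's sqlite3. Table and column names are illustrative.

```python
import sqlite3

con = sqlite3.connect(":memory:")

con.executescript("""
-- 1:N - the relationship attribute lives only on the N side (Funcionario points to Departamento)
CREATE TABLE departamento (
    id   INTEGER PRIMARY KEY,
    nome TEXT NOT NULL
);

CREATE TABLE funcionario (
    id              INTEGER PRIMARY KEY,
    nome            TEXT NOT NULL,
    -- total participation: every employee must reference a department
    departamento_id INTEGER NOT NULL REFERENCES departamento(id)
);

-- N:N - an intermediate (junction) entity combines the keys of both entities
CREATE TABLE livro (id INTEGER PRIMARY KEY, titulo TEXT NOT NULL);
CREATE TABLE autor (id INTEGER PRIMARY KEY, nome  TEXT NOT NULL);

CREATE TABLE livro_autor (
    id_livro INTEGER NOT NULL REFERENCES livro(id),
    id_autor INTEGER NOT NULL REFERENCES autor(id),
    PRIMARY KEY (id_livro, id_autor)
);
""")
```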
mmsfarias
1,875,453
How to Secure the 5 Cloud Environments?
Organizations have several options for cloud deployments, each with distinct features and security...
0
2024-06-03T14:09:12
https://www.clouddefense.ai/how-to-secure-the-5-cloud-environments/
![How to Secure the 5 Cloud Environments?](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vpfvx5yzo1wzk18fdny0.jpg) Organizations have several options for cloud deployments, each with distinct features and security requirements. This article discusses five key cloud environments: public cloud, private cloud, hybrid cloud, multi-cloud, and multi-tenant cloud, detailing the best practices for securing each one. Public cloud environments, managed by external providers, offer scalable resources but necessitate user vigilance to safeguard access, applications, and data. Security threats such as weak access controls, inadequate logging, account compromise, data breaches, insecure APIs, DDoS attacks, and data loss can be mitigated through strong authentication, regular updates, continuous monitoring, comprehensive security policies, data classification, and staff training. Private cloud environments, designed for single organizations, provide enhanced control and privacy but pose risks such as outdated VM images, insider threats, and data loss. Security measures include optimizing access control, encrypting data, ensuring physical security, enhancing data privacy, using security tools, implementing two-factor authentication, and maintaining thorough monitoring and logging. Hybrid cloud environments, integrating on-premises data centers with public cloud services, face challenges like vendor compatibility, network integration, API security, data protection, visibility, security responsibilities, compliance, and skill gaps. Effective security strategies involve standardizing processes, consistent encryption, secure tool configuration, business continuity planning, access management, utilizing Cloud Workload Protection Platforms (CWPP), isolating critical systems, and employing Cloud Security Posture Management (CSPM). Multi-cloud environments, using services from multiple providers, introduce risks such as configuration vulnerabilities, limited visibility, complex incident response, and regulatory challenges. Enhancing security involves adopting CSPM, deploying cloud-native SIEM, implementing cloud-native guardrails, and using tools compatible with multiple clouds. Multi-tenant cloud environments, where one infrastructure serves multiple customers, present risks like data breaches, high downtime, configuration management issues, and insufficient visibility. Securing these environments requires robust access control, audit trails, compliance management, data encryption, data loss prevention (DLP), an incident response plan, regular patching, tenant isolation, and the use of cloud security tools. By emphasizing data protection and compliance, businesses can confidently leverage the advantages of cloud computing while addressing the specific security challenges of each cloud environment.
clouddefenseai
1,875,450
36 DevOps Testing Tools [2024]
DevOps is a combination of Development (Dev) and Operations (Ops) methodologies to ensure a...
0
2024-06-03T14:00:31
https://www.lambdatest.com/blog/devops-testing-tools/
devops, testing, tools, automation
DevOps is a combination of Development (Dev) and Operations (Ops) methodologies to ensure a continuous delivery of high-quality software. Testing in DevOps is integrated throughout the entire software development process. This integration ensures a continuous feedback loop, which enables quick detection and helps resolve any issues that arise. DevOps testing tools are important in streamlining testing procedures and encouraging developers and operations teams to address these concerns quickly. This facilitates a seamless [Continuous Integration and Continuous Delivery (CI/CD)](https://www.lambdatest.com/blog/what-is-continuous-integration-and-continuous-delivery/) pipeline. > ***Keep your JavaScript code safe from syntax errors with our free online [JS Escape](https://www.lambdatest.com/free-online-tools/js-escape?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) tool by quickly and easily converting special characters in your JavaScript. *** In this blog, we will discuss some of the top DevOps testing tools for 2024 to help you make informed decisions about your testing strategies, ensuring the smooth integration of testing into your DevOps workflows. ## DevOps Testing: An Overview DevOps testing is an automated approach designed to facilitate the continuous and swift delivery of top-notch software throughout the [Software Development Life Cycle (SDLC)](https://www.lambdatest.com/learning-hub/software-development-life-cycle?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub). The market size of [DevOps](https://www.lambdatest.com/blog/getting-started-with-devops/?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=blog) is anticipated to grow to USD 25.5 billion by 2028. This data indicates that most organizations will adopt DevOps for software development. This will increase the demand for DevOps testing in 2024. ![](https://cdn-images-1.medium.com/max/2000/0*QBRVbicOiAvA1pNo.png) Unlike conventional testing techniques that often involve manual execution, demanding more human involvement and being open to errors, DevOps testing offers a path towards quicker and more dependable software releases by [integrating testing](https://www.lambdatest.com/learning-hub/integration-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) within DevOps and automating the testing process. Therefore, you must know about the DevOps testing tools for 2024 to implement DevOps practice easily. However, the main challenge you may face is selecting the most suitable DevOps testing tools for your specific needs. DevOps testing tools emerge as crucial components by enhancing the essential automation and optimal methodologies to pinpoint and rectify issues quickly, ensuring the software aligns with the necessary quality standards. > ***Effortlessly convert [RGB to CMYK](https://www.lambdatest.com/free-online-tools/rgb-to-cmyk?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) format with our free online tool. Achieve precise color matching for your projects. Fast, accurate, and easy to use. *** ## DevOps Testing Tools DevOps testing tools are crucial in bringing DevOps practices to life, including the entire software development cycle — from code reviews, version control, and deployment to monitoring. 
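As a small illustration of what automated testing in the pipeline means in practice, here is a minimal pytest sketch of the kind of check a CI/CD job could run on every commit. The function under test and its expected behavior are made up for this example and are not tied to any specific tool covered below.

```python
# test_discounts.py - a tiny automated check a CI/CD pipeline could run on every commit
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_discount_is_applied():
    assert apply_discount(200.0, 25) == 150.0


def test_invalid_percent_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

A CI server such as Jenkins or GitLab CI/CD, both covered later in this list, would execute checks like these automatically on each push and fail the build when they break.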
In DevOps practice, [software testing tools](https://www.lambdatest.com/blog/software-testing-tools/?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=blog) are designed to automate the testing process. These tools are pivotal in expediting testing procedures by seamlessly incorporating them into the continuous integration and delivery (CI/CD) pipeline. DevOps testing tools include all the platforms, servers, tools, and applications used in the new SDLC. Hence, selecting the right DevOps tool: * Enhances and improves communication * Automates repetitive processes * Eliminates context switching * Utilizes software monitoring to deliver software more rapidly Therefore, DevOps testing tools ensure automation, transparency, and effective collaboration. It makes it easier for stakeholders in development, business, or security to exchange data and technical information quickly and deliver superior software applications to the users. So, now let us explore the top DevOps testing tools for 2024. Before we learn about the various tools, it is crucial to understand [DevOps automation](https://www.lambdatest.com/learning-hub/devops-automation?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub). This guide includes an overview of the automation process, how to integrate DevOps practices into your testing strategies, and more. ## List of DevOps Testing Tools Utilizing DevOps testing tools provides DevOps teams with various advantages. This includes improving the team’s code quality, speeding up the software application’s time to market in the DevOps pipeline, and delivering continuous and prompt feedback to enhance collaboration among all the teams involved. Look at the top DevOps testing tools for 2024: ## LambdaTest LambdaTest is an AI-powered test orchestration and execution platform that lets you run manual and automated tests at scale with over 3000+ real devices, browsers, and OS combinations. This platform is pivotal in boosting DevOps operations through its cloud testing framework. By enabling automated and scalable cross-browser testing, it becomes instrumental in guaranteeing the excellence and adaptability of web applications across various browsers and devices. This feature helps DevOps teams optimize their testing procedures, cutting down on manual work and hastening the release of top-notch software. While LambdaTest concentrates on the testing stage of the SDLC, it coordinates with other DevOps tools and methodologies by facilitating smooth and productive testing processes. ![](https://cdn-images-1.medium.com/max/2682/0*KlF7X9IjIsfwnb_u.png) **Why is LambdaTest one of the best DevOps testing tools?** * It supports various [automation testing frameworks](https://www.lambdatest.com/blog/automation-testing-frameworks/?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=blog) like Selenium, [Playwright](https://www.lambdatest.com/playwright?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub), Cypress, Appium, [Espresso](https://www.lambdatest.com/espresso?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub), [XCUITest](https://www.lambdatest.com/xcuitest?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub), and more. * It integrates with Azure DevOps, enabling the creation of work tasks directly for your software project from this platform. 
* It offers the flexibility to capture direct screenshots from up to 25 browsers and operating systems online. * It enables [geolocation testing](https://www.lambdatest.com/blog/how-to-test-geolocation/?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=blog) across 53 locations worldwide. * It allows users to record videos, capturing essential aspects of the testing process for analysis. * It enables live interactive browser testing by selecting the browser and operating system environment that aligns with your requirements. * It incorporates an intelligent image-to-image comparison feature, facilitating the detection of visual discrepancies in a new build. It includes icon size, padding, color, layout, text, and element positioning. * It offers [HyperExecute](https://www.lambdatest.com/hyperexecute?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=webpage), LambdaTest’s AI-powered end-to-end test orchestration cloud, which delivers rapid [test automation](https://www.lambdatest.com/automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=webpage), increasing speeds up to 70% faster than conventional cloud grids. * It efficiently integrates with over 200 project management and [bug-tracking tools](https://www.lambdatest.com/blog/bug-tracking-tools/). > **Convert ASCII codes to Text/String effortlessly with our free online [ASCII to Text](https://www.lambdatest.com/free-online-tools/ascii-to-text?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) Converter. Fast, accurate, and user-friendly. Get results in one click, perfect for development needs. ** ## Selenium [Selenium](https://www.lambdatest.com/selenium?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) is an automation testing tool widely used to perform web automation testing; it is a free and open-source tool designed for validating web applications across various browsers and platforms. Selenium stands as the top-tier DevOps testing tool. Enabling the automation of [web application testing](https://www.lambdatest.com/learning-hub/web-application-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) on browsers plays a pivotal role in streamlining the DevOps workflow. This automation cuts down on manual intervention and accelerates the testing pace while upholding uniformity in testing procedures across diverse settings. Collectively, these aspects enhance the efficiency and trustworthiness of the DevOps pipeline. ![](https://cdn-images-1.medium.com/max/2690/0*y_ah1hGv3I4azIwo.png) **Selenium is considered one of the best DevOps testing tools for several reasons:** * It supports various programming languages such as Java, Python, C#, PHP, Ruby, and JavaScript. * It enables [test execution](https://www.lambdatest.com/learning-hub/test-execution?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) on different operating systems, including Windows, Mac, or Linux. * It performs tests using Mozilla Firefox, Internet Explorer, Google Chrome, Safari, or Opera browsers. * It integrates with top automation testing frameworks like [TestNG](https://www.lambdatest.com/learning-hub/testng?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) for application testing and reporting. 
* It offers seamless integration with Jenkins for Continuous Integration and Continuous Delivery.
* It is an open-source and portable solution.
* It provides easy and user-friendly identification and utilization of WebElements.

Enhance your Selenium 4 knowledge by watching this detailed video tutorial to gain valuable insights. Learn about Selenium’s functionality, new features, and architecture.

{% youtube mMStkc3W9jY %}

> **Convert String/[text to ASCII](https://www.lambdatest.com/free-online-tools/text-to-ascii?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) codes. It is simple to use, precise and produces fast results.**

## Cypress

[Cypress](https://www.lambdatest.com/learning-hub/cypress-tutorial?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) is an [automation testing tool](https://www.lambdatest.com/blog/automation-testing-tools/?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=blog) involved in DevOps practice and helps build modern web applications. Its primary objective is to address the challenges developers and QA engineers face while testing modern web applications. It runs tests concurrently with your code and allows changes to be fixed during development.

This framework is well-suited for DevOps testing, as it is easy to set up and allows developers to integrate with CI/CD pipelines quickly.

![](https://cdn-images-1.medium.com/max/2644/0*9eagXXwmUx068Uyp.png)

**Cypress is considered one of the best DevOps testing tools for several reasons:**

* It allows you to write faster, easier, and more reliable tests.
* It supports various browsers and facilitates effortless extension through custom commands, enabling automation across diverse facets of the development cycle.
* It validates tests in real-time directly within the browser, enhancing the efficiency of the testing process.
* It incorporates built-in commands that expand the framework’s functionality.
* It simplifies test execution across devices and operating systems through its user-friendly Command Line Interface (CLI).
* It has numerous third-party plugins that allow seamless integration for added features.

If you are new to Cypress and want to upscale your Cypress automation testing experience, watch the detailed video below and get complete details on Cypress, its functionalities, features, methods, and more.

{% youtube jX3v3N6oN5M %}

## Puppeteer

The [Puppeteer](https://www.lambdatest.com/puppeteer?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) framework is a Node.js library offering a sophisticated API for managing headless Chrome or Chromium through the DevTools Protocol. By leveraging Puppeteer, you can streamline website automation and testing processes, avoid the complexities associated with the WebDriver protocol, and integrate browser checks into DevOps testing practices.

![](https://cdn-images-1.medium.com/max/2672/0*iodlqGz27qW-WxoH.png)

**Puppeteer is considered one of the best DevOps testing tools for several reasons:**

* It uses Chromium’s capability to operate with an empty window. It allows it to perform [headless browser testing](https://www.lambdatest.com/learning-hub/headless-browser-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub).
This proves beneficial in scenarios where browser output visibility is restricted (e.g., in CI servers) or automated execution is the primary objective. * It serves as a high-level web scraping tool that captures the content of each link, facilitating the recursive download of linked pages. * It ensures compatibility with Chrome and Chromium by using Puppeteer’s Blink rendering engine. * It allows you to operate Puppeteer on Mac, Windows, and Linux. > **Discover LambdaTest’s [Reverse Text](https://www.lambdatest.com/free-online-tools/reverse-text-generator?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) Generator: the ultimate online tool for effortlessly reversing your texts & enhancing creativity. Perfect for coders, writers, & SEO experts. Try it now! ** ## Appium [Appium](https://www.lambdatest.com/appium?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) is an open-source [mobile automation testing tool](https://www.lambdatest.com/blog/best-mobile-automation-testing-tools/?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=blog) compatible with Android and iOS devices. Its adaptability enables users to test diverse app types, from native to web to hybrid applications. It has become a popular choice in DevOps due to its ability to automate functional testing and enhance the overall functionality of applications. By implementing DevOps practices and using the DevOps testing tool with Appium, teams can significantly improve the reliability of their [mobile app testing](https://www.lambdatest.com/mobile-app-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=webpage). Appium automation streamlines the testing process, eliminates manual tasks, and ensures consistent testing across various platforms and devices. Integrating Appium into the DevOps workflow enables teams to release software faster and with greater reliability, ultimately enhancing software quality and user satisfaction. ![](https://cdn-images-1.medium.com/max/2684/0*CWdmEYxgCLsXvnEw.png) **Appium is considered one of the best DevOps testing tools for several reasons:** * Its notable feature is its support for automated tests on [emulators and simulators](https://www.lambdatest.com/blog/app-testing-on-emulator-simulator/?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=blog), which are actively integrated into the DevOps process. * It provides flexibility and supports [end-to-end testing](https://www.lambdatest.com/learning-hub/end-to-end-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) in multiple programming languages, including Java, JavaScript, Node.js, Python, Ruby, and C#. * It supports testing on both iOS and Android devices using the same API. * It has a recorded and plays feature, allowing testers to expedite the testing process and create [test scripts](https://www.lambdatest.com/learning-hub/test-scripts?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) in various programming languages. * It seamlessly integrates with CI servers, taking automation testing to a higher level and ensuring a smooth integration into the development workflow. Learn how to use Appium to enhance your mobile app testing process. Watch this detailed video tutorial, gain valuable insights, and get started with your mobile app automation. 
{% youtube pI5zrUhydyo %} ## Mocha [Mocha](https://www.lambdatest.com/mocha-js?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub), an open-source [JavaScript test framework](https://www.lambdatest.com/blog/best-javascript-frameworks/?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=blog) developed on Node.js and compatible with browsers, executes asynchronous DevOps testing. This allows the execution of additional tasks in the background. It delivers complete reporting on the tests that have passed, facilitating the early identification of the source of bugs. ![](https://cdn-images-1.medium.com/max/2678/0*Cbr_y8n7Lgt8J_sa.png) **Mocha is considered one of the best DevOps testing tools for several reasons:** * Its clean and simple syntax for writing tests helps developers maintain the [test suite](https://www.lambdatest.com/learning-hub/test-suite?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub). * It supports Behavior-driven Development (BDD) and [Test-Driven Development (TDD)](https://www.lambdatest.com/learning-hub/test-driven-development?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) styles. * It does not have its assertion, but using this with assertion libraries like Chai helps you choose the preferred assertion style and library. * It provides “before,” “after,” “beforeEach,” and “afterEach” hooks, allowing developers to set up preconditions or clean up after tests. * It can be integrated with CI/CD pipelines that allow the execution of DevOps tests when there are codebase changes. Learn everything about what Mocha is and how to implement it. Watch the video tutorial below and get valuable details. {% youtube xQCNEH2hZt4 %} > **Effortlessly identify differences in your JSON files with LambdaTest’s [JSON Compare tool](https://www.lambdatest.com/free-online-tools/json-compare?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools). Quick, accurate, and user-friendly, it’s the perfect solution for developers and QA engineers to streamline their workflows and ensure data integrity. ** ## Cucumber Cucumber is not just any tool but a framework that uses [Behavior-Driven Development (BDD)](https://www.lambdatest.com/learning-hub/behavior-driven-development?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub), where the [test scenarios](https://www.lambdatest.com/learning-hub/test-scenario?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) are written in human-readable language, making them easily understandable for non-technical individuals. This feature makes Cucumber well-suited for DevOps testing, as it helps streamline the [software development process](https://www.lambdatest.com/learning-hub/software-development-process?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub). ![](https://cdn-images-1.medium.com/max/2690/0*GBgISKCovs_C4f3V.png) **Cucumber is considered one of the best DevOps testing tools for several reasons:** * It strongly emphasizes the end-user experience, ensuring testing meets user expectations. * It is very easy to write [test cases](https://www.lambdatest.com/learning-hub/test-case?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) in Cucumber, which simplifies the testing workflow. 
* It effectively addresses the communication challenges between technical and non-technical team members in a project, fostering collaboration. * It identifies the exact match for each step from the steps definition (code file), enhancing precision in testing. * It offers a detailed setup process for the [testing environment](https://www.lambdatest.com/blog/what-is-test-environment/?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=blog), contributing to ease of use and efficiency in testing procedures. * It allows writing feature files in a human-readable Gherkin syntax. ## SpecFlow [SpecFlow](https://www.lambdatest.com/blog/specflow-tutorial-for-automation-testing/?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=blog) is a test automation solution for .NET, rooted in Behavior-Driven Development (BDD) principles. This DevOps testing tool can define, manage, and automate human-readable acceptance tests in both .NET projects, including .NET Core. It allows easy development of collaboration among teams to define software behavior through natural language specifications written in Gherkin syntax. ![](https://cdn-images-1.medium.com/max/2680/0*Kk7qOaKDV-sc3GDU.png) **SpecFlow is considered one of the best DevOps testing tools for several reasons:** * It integrates with widely used .NET development tools like Microsoft Visual Studio. * It offers compatibility with the command line for integration with a build server. * It uses the official Gherkin parser and supports over 70 languages, ensuring flexibility in test specification. * Its tests are linked to your application code through bindings, providing the flexibility to execute tests using the testing framework of your choice. * It provides its dedicated test runner, SpecFlow+ Runner, offering additional flexibility and control over the testing process. > **Convert Octal numbers to binary format with ease using our free online [Octal to Binary](https://www.lambdatest.com/free-online-tools/octal-to-binary?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) Converter tool. Perfect for developers, engineers, and students. Give it a try! ** ## Jenkins Jenkins is one of the most widely used tools in today’s market, specifically designed for continuous integration. This DevOps testing tool is written in Java and has an important role in building and testing software projects that help in incorporating necessary changes to their projects. This tool goes beyond by focusing on the continuous delivery of software applications, integrating a diverse range of testing and deployment software. ![](https://cdn-images-1.medium.com/max/2702/0*OPsOf3UCwSCHP3cx.png) **Jenkins is considered one of the best DevOps testing tools for several reasons:** * It is an open-source tool with a robust community supporting its development. * It is compatible with Windows, Mac, and Unix machines, supporting all operating systems and versions of Linux, Mac OS, or Windows. * It ensures easy installation, as Jenkins is delivered as a WAR file. * It allows users to set up the WAR file in their JEE container. * It facilitates the integration of various DevOps stages with various plugins available. Learn everything about Jenkins and make your testing process efficient. Watch this video tutorial and gain detailed insights on how Jenkins can be integrated with your testing framework. 
{% youtube nCKxl7Q_20I %}

## GitLab

Being one of the [best CI/CD tools](https://www.lambdatest.com/blog/best-ci-cd-tools/?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=blog), GitLab plays a crucial role in automating the entire software delivery pipeline, encompassing code changes to deployment, all within a single integrated environment. This automation ensures the rapid and reliable delivery of software updates. Its CI/CD automatically builds and tests code changes, enabling teams to detect and address issues early in the development cycle.

![](https://cdn-images-1.medium.com/max/3200/0*FwVlVRJ3G8Q3Dy6u.png)

**GitLab CI/CD is considered one of the best DevOps testing tools for several reasons:**

* It offers a fast code deployment and development system.
* It is easy to learn and use.
* It enables project team members to integrate their work daily, making it easier to identify integration errors through automated builds.
* It allows you to execute jobs faster by setting up your runner, an application that processes builds with all dependencies pre-installed.
* It offers cost-effective and secure GitLab CI solutions with flexible costs based on the machine used to run it.

> ***Need to convert CSV to JSON? Try our free [CSV to JSON](https://www.lambdatest.com/free-online-tools/csv-to-json?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) converter tool to convert your CSV files to JSON format. Simple and easy to use. Try it now for free!***

## Bamboo

This tool is an Atlassian product; it serves as a key player in DevOps testing by streamlining the processes of automatic build, test, and release within a unified environment. It supports a variety of technologies and languages, including Docker, Git, SVN, Mercurial, and Amazon S3 buckets; Bamboo additionally integrates with other Atlassian tools, such as Jira and Confluence.

![](https://cdn-images-1.medium.com/max/2540/0*Kg0uhL_UiVm54kIp.png)

**Bamboo is considered one of the best DevOps testing tools for several reasons:**

* It offers a continuous integration (CI) platform in self-hosted and cloud configurations.
* It distinguishes itself from other CI tools and provides a user-friendly drag-and-drop interface for easy configuration of development workflows and seamless orchestration of tests at each stage.
* Its integration is maintained with Jira, Bitbucket, and other tools within Atlassian, along with access to a vast marketplace of plugins for enhanced integration with your tool stack.
* It integrates with Docker and AWS CodeDeploy to facilitate Continuous Delivery.
* It provides users access to consolidated release management and build status, ensuring end-to-end quality monitoring within a single platform.

## Docker

Docker is a well-known DevOps testing tool that accelerates and simplifies various software development life cycle workflows collaboratively. It supports the design and operation of container-based distributed applications, allowing DevOps teams to work together effectively. It helps exchange container images, application development, and collaborative efforts, enabling users to develop programs from modular components.

![](https://cdn-images-1.medium.com/max/3200/0*zLtNRdT1rkcWHRUZ.png)

**Docker is considered one of the best DevOps testing tools for several reasons:**

* Its isolated containers allow concurrent execution of multiple Docker environments, with the tool providing reusable data volumes for performance testing.
* Its scalable source code is supported on both Linux and Windows.
* It allows users to execute, manage, and package deployed applications using the Docker app.
* It enables packaging applications for consistent operation in diverse environments, spanning on-premises, Azure, AWS, or Google.
* It features a container runtime compatible with operating systems such as Windows and Linux servers.

In [automation testing](https://www.lambdatest.com/learning-hub/automation-testing), it is important to ensure that tests in one environment do not disrupt tests in other environments; tests should run independently to achieve this. Docker is a valuable tool for meeting this critical requirement. With Docker, you can run tests using various automation testing tools like Selenium, Cypress, Playwright, and more, helping you maintain quality code, achieve high test coverage, and ultimately develop a high-quality product.

To start using Docker with an automation tool like Selenium, follow this blog on how to [run Selenium tests in Docker](https://www.lambdatest.com/blog/run-selenium-tests-in-docker/). This guide covers all the details to help you use Docker with Selenium.

> **Need to convert [XML to CSV](https://www.lambdatest.com/free-online-tools/xml-to-csv?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools)? Simplify your data conversion process with our XML to CSV Converter tool. Convert your XML files to CSV format quickly and easily in seconds!**

## Typemock

Typemock is a [unit testing framework](https://www.lambdatest.com/blog/unit-testing-frameworks/?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=blog) uniquely designed to provide robust support for legacy code, available on both Windows and Linux for C++ and .NET for Microsoft Visual Studio. It has essential features such as coverage reports to identify areas without [test coverage](https://www.lambdatest.com/learning-hub/test-coverage?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub), suggestions for new test cases, real-time code reviews highlighting coverage gaps, and insights into potential issues within your code, making it well-suited for this list of DevOps testing tools.

![](https://cdn-images-1.medium.com/max/2676/0*SbB1j7XPwjRqN0yI.png)

**Typemock is considered one of the best DevOps testing tools for several reasons:**

* It provides a [code coverage](https://www.lambdatest.com/learning-hub/code-coverage?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) capability that shows your coverage within your editor while writing scripts.
* It includes three significant testing features: Isolator for C++, Isolator for .NET, and Isolator for Build Server.
* Its suggestion features automatically provide relevant test suggestions suitable for your code, streamlining the testing process.
* It is well-suited for integration with TFS and VSTS, making it an excellent choice for build agents operating on TFS Update 2 and higher or VSTS.

## Apache JMeter

This is one of the most used DevOps testing tools, specifically designed for websites; this load testing tool finds utility within the DevOps methodology. It is an open-source, Java-based load testing tool that offers a user-friendly graphical interface. This DevOps testing tool measures web applications’ and various services’ performance and load functional behavior.
![](https://cdn-images-1.medium.com/max/2692/0*wix3htzvmqa-AG5_.png) **Apache JMeter is considered one of the best DevOps testing tools for several reasons:** * It is known for its high extensibility and proficiency in loading performance tests across different server types: * Web: HTTP, HTTPS, SOAP, * Database: JDBC, LDAP, JMS, and * Mail: POP3. * It stores its test plans in XML format, allowing users to generate them through a text editor. * It has an interactive GUI, and JMeter supports various testing approaches, including [load testing](https://www.lambdatest.com/learning-hub/load-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub), distributed testing, and [functional testing](https://www.lambdatest.com/learning-hub/functional-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub). * It is compatible with any environment or workstation accepting a Java virtual machine, such as Windows, Linux, and Mac. * It can effectively simulate multiple users through virtual or unique users by supporting protocols like HTTP, JDBC, LDAP, SOAP, JMS, and FTP. * It is a multi-threading framework allowing concurrent and simultaneous sampling of different functions through numerous thread groups. > **Looking to convert binary to hex? Convert binary numbers to hex with ease using our free online [Binary to Hex](https://www.lambdatest.com/free-online-tools/binary-to-hex?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) Converter tool. Perfect for developers and coders. ** ## K6 It is a DevOps testing tool that performs load, performance, and reliability testing. Whether deployed in the cloud or accessed through open-source channels, its core focus lies in automating tests that measure performance objectives, with the added convenience of accepting JavaScript-written test cases for a smoother onboarding experience. ![](https://cdn-images-1.medium.com/max/2686/0*wwWC975rFPxhTtuF.png) **K6 is considered one of the best DevOps testing tools for several reasons:** * It offers plugins compatible with various DevOps tools such as GitHub, Bamboo, Jenkins, and more than 20 other integrations, ensuring seamless integration into your existing tool stack. * It allows users to transcend the traditional boundaries of the QA silo by enabling early and continuous performance testing. * Its capabilities extend to fault injection, infrastructure, and [synthetic testing](https://www.lambdatest.com/learning-hub/synthetic-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub). * It integrates with IDE extensions, converters, result stores, visualization systems, CI/CD environments, and test management solutions. If you wish to learn how to use k6 to validate your application performance, follow this detailed tutorial on [k6 testing](https://www.lambdatest.com/blog/k6-testing-tutorial/?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=blog) and get complete insights on its functionality, features, benefits, and more. ## SoapUI This DevOps testing tool is used to perform functional testing, [security testing](https://www.lambdatest.com/learning-hub/security-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub), load testing, and web services testing. 
It offers drag-and-drop features for creating test suites, step definitions, and requests, eliminating the need for manual test script coding. ![](https://cdn-images-1.medium.com/max/2674/0*87t6cCCQLlyse5l8.png) **SoapUI is considered one of the best DevOps testing tools for several reasons:** * It operates as a cross-platform, free, open-source [API testing tool](https://www.lambdatest.com/blog/api-testing-tools/?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=blog), and SoapUI considers SOAP and REST APIs. * It creates and executes tests on APIs, ensuring the application functions as intended and can withstand high traffic. * Its [debugging](https://www.lambdatest.com/learning-hub/debugging?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) interface streamlines the process of tracking test flow, variables, properties, requests, context, and more, enhancing the efficiency of test creation. * It enables reading [test data](https://www.lambdatest.com/learning-hub/test-data?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) from external sources, including Excel, XML, JDBC, and Files. * Web Services Coverage helps dynamically analyze how well functional tests cover your SOAP or REST Service contract. > **Want to convert ASCII to hexadecimal? With our [ASCII to hexa](https://www.lambdatest.com/free-online-tools/ascii-to-hex?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) converter to convert text to its hexadecimal equivalent. Get started today and streamline your workflow! ** ## Parasoft It is a DevOps testing tool that performs unit and static analysis. Parasoft performs code coverage for files written in C and C++. It automatically evaluates code against various industry and security standards, flagging potential compliance issues. ![](https://cdn-images-1.medium.com/max/2666/0*qGNavSkdiJNlpwEf.png) **Parasoft is considered one of the best DevOps testing tools for several reasons:** * It incorporates AI and machine learning capabilities to enhance test case design and execution. * It integrates with numerous cloud providers, Integrated Development Environments (IDEs), CI/CD tools, and other DevOps testing solutions, ensuring compatibility with your tech stack. * It features multi-metric code coverage analysis, a stubbing and C mocking framework, and automated cross-platform execution. * It monitors the runtime during the execution of host-based or embedded applications or running unit tests in C or C++. * It utilizes the most comprehensive set of C/C++ static code analysis techniques. ## SimpleTest It is not an open-source DevOps testing tool, but it is specifically designed to emulate the characteristics of popular unit testing tools and apply these functionalities to PHP applications. This tool is built around test case classes written as extensions of base test case classes. ![](https://cdn-images-1.medium.com/max/2666/0*h6VzV3jJBeeQvcau.png) **SimpleTest is considered one of the best DevOps testing tools for several reasons:** * Its notable features include incorporating mock objects for more efficient testing with reduced resource usage. * It offers internal web browsers for simulating user interactions within a web application. * It includes actions such as signing up for a newsletter through a form. * It supports SSL, proxies, and basic authentication test cases. 
* It mirrors the application’s structure, with tests scripted in the PHP language. > **Convert your JSON files to CSV format in seconds with our easy-to-use [JSON to CSV ](https://www.lambdatest.com/free-online-tools/json-to-csv?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools)Converter. It’s fast, reliable, and user-friendly, making data conversion simple. )** ## Predator Predator operates as a DevOps testing tool, performing load tests across various application instances. It seamlessly integrates with major platforms like Kubernetes, DC/OS, and Docker, providing real-time reporting on test results. Additionally, Predator enables test data storage in Cassandra, Postgres, MySQL, MSSQL, and SQLite formats. ![](https://cdn-images-1.medium.com/max/2678/0*DQESvH2_3OCIqY1j.png) **Predator is considered one of the best DevOps testing tools for several reasons:** * It is a distributed, open-source [performance testing](https://www.lambdatest.com/learning-hub/performance-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) platform tailored for APIs. * It provides cloud resources, allowing unlimited tests with numerous instances and virtual instances. * It delivers comprehensive performance metrics, and its configuration allows for the automatic loading of your API through scheduled tests. * It integrates with custom dashboards, test metrics, and service logs, Simplifying the process of identifying performance bugs. * It ensures real-time reports to keep users informed on ongoing performance evaluations. ## Watir It is an open-source tool that streamlines [UI testing](https://www.lambdatest.com/learning-hub/ui-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) across major web browsers by harnessing the Selenium framework and the Ruby programming language. Using Watir, one can replicate human behavior through commands like “click the button” and “open the dropdown,” with this tool, you can efficiently convert these shorthand commands into complete scripts. This allows Watir to be applied to applications written in any programming language. ![](https://cdn-images-1.medium.com/max/2656/0*gR-V8law1XVb5D3S.png) **Watir is considered one of the best DevOps testing tools for several reasons:** * It automatically captures screenshots upon completion of testing. * Its page performance verification is facilitated through objects like *performance.navigation*, *performance.timing*, *performance.memory*, and *performance.time* origin, all directly connected to the browser. * It simplifies the testing of file downloads from the UI. * It provides user-friendly APIs for testing alerts and popups. > ***Convert String and [text to ASCII](https://www.lambdatest.com/free-online-tools/text-to-ascii?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) codes. It is simple to use, precise and produces fast results. *** ## TestComplete It is a [UI testing tool](https://www.lambdatest.com/blog/best-ui-testing-tools/?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=blog) with versatile desktop, web, and mobile application capabilities. Its AI-powered features enable seamless test automation across various operating systems, offering the flexibility to define tests with or without custom scripts. It further provides parallel test execution and real-time feedback during test execution. 
Moreover, It establishes over a dozen integrations with popular DevOps tools like Bamboo, Jenkins, and other automation testing frameworks. ![](https://cdn-images-1.medium.com/max/2682/0*djOZ30xQtIE_6skX.png) **TestComplete is considered one of the best DevOps testing tools for several reasons:** * It is a DevOps testing tool that offers user-friendly GUI test automation capabilities. * It integrates with capabilities that extend to DevOps, [test management](https://www.lambdatest.com/learning-hub/test-management?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub), version control, and bug-tracking tools. * It supports script writing in VBScript, JavaScript, C++, or Python. Additionally, it includes a recording and playback feature. * It is particularly beneficial for applications with dynamic user interfaces, and it uses an object recognition engine to detect all elements of dynamic interfaces. ## TestProject It is a free automation testing tool designed to enhance the test automation experience for web and mobile environments, making it the best DevOps testing tool. Its community-driven tool aims to provide effective, industry-standard test automation capabilities accessible to any team of testers. ![](https://cdn-images-1.medium.com/max/2684/0*W0pMr0370khVfKLH.png) **TestProject is considered one of the best DevOps testing tools for several reasons:** * It supports Android, iOS, and all major web browsers. * Its test cases can be written using TestProject’s SDK tool or recorded directly in the browser. The flexibility extends to sharing all cases with other team members. * It offers seamless integrations with open-source automation frameworks like Selenium and Appium. * It features a built-in recording capability, allowing users to share and reuse recorded steps across various test cases. * It integrates with popular CI/CD tools like GitHub, GitLab, Jenkins, CircleCI, Slack, and TeamCity. > **Discover LambdaTest’s [Reverse Text](https://www.lambdatest.com/free-online-tools/reverse-text-generator?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) Generator: the ultimate online tool for effortlessly reversing your texts & enhancing creativity. Perfect for coders, writers, & SEO experts. Try it now! ** ## Leapwork It is an automation platform strategically designed to simplify test automation, particularly for individuals without coding expertise. The platform features a user-friendly visual dashboard without scripting language, allowing users to write tests through flowcharts. Leapwork’s offering extends to running tests across web applications, local machines, virtual machines, and legacy mainframes. This platform can be considered the best DevOps testing tool with all these easy-to-use features. ![](https://cdn-images-1.medium.com/max/2698/0*vs10rd4bJLHkoHPX.png) **Leapwork is considered one of the best DevOps testing tools for several reasons:** * Its automation platform allows beginners to create tests effortlessly using visual flow charts, bypassing the need for coding experience or scripting languages. * It can execute tests across diverse local and virtual machines and web applications. * It offers flexible test scheduling to accommodate varying needs. * It facilitates workflow management through a dashboard, providing comprehensive visibility and enabling users to track and revert changes. 
* It utilizes [Selenium Grid](https://www.lambdatest.com/learning-hub/selenium-grid?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) and supports the execution of parallel tests. * Its compatibility with various DevOps and CI/CD tools further enhances Leapwork’s integration capabilities. ## Tosca This DevOps testing tool was developed by Tricentis. It is a test platform designed to remove traditional bottlenecks in the testing process. Utilizing AI-powered engines, the platform recommends the most effective test cases and uses model-based test automation to enhance scalability and reusability. Furthermore, Tosca integrates into the DevOps pipeline, improving compatibility with over 160 supported technologies and plugins. ![](https://cdn-images-1.medium.com/max/2696/0*aoiYUNzX5NcbIdSo.png) **Tosca is considered one of the best DevOps testing tools for several reasons:** * It uses Vision AI to automate testing for applications that were previously considered challenging to automate. * It integrates with platforms like Atlassian Jira, Zendesk Suite, Jenkins, ServiceNow, and Azure DevOps Services. * Its intelligent test automation is leveraged to optimize and accelerate end-to-end testing throughout the software development lifecycle. * Its API scan solution streamlines [API testing](https://www.lambdatest.com/learning-hub/api-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) with a codeless capability, enhancing testing speed and improving test stability. * Its notable feature is accessibility testing, helping optimize application usability by validating web app accessibility compliance against WCAG 2.0 and AA measures. > **Effortlessly identify differences in your JSON files with LambdaTest’s [JSON Compare](https://www.lambdatest.com/free-online-tools/json-compare?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) tool. Quick, accurate, and user-friendly, it’s the perfect solution for developers and QA engineers to streamline their workflows and ensure data integrity. ** ## AppVerify AppVerify is a continuous testing tool, including the entire DevOps lifecycle. Its scope includes functional testing, performance testing capabilities, application monitoring, and robotic process automation. With AppVerify, creating UX test cases is simplified, as users can interact with the app, and this tool converts these actions into scripts. Test cases can be expanded by providing specifications, allowing AppVerify to account for all possible script outcomes without requiring custom code. ![](https://cdn-images-1.medium.com/max/2686/0*lKLK_KHVBjjIlAFc.png) **AppVerify is considered one of the best DevOps testing tools for several reasons:** * It enables the testing of any application by replicating the same UI across various access points, including thin clients, fat clients, and web portals. * It replicates users’ interactions with the application. It provides valuable metrics about end users’ experiences, with screenshots of failures. * Its scripts are automatically generated as users interact with the application and can be easily edited without coding. * It seamlessly extends to the performance testing and application monitoring modules, offering a cohesive testing and monitoring experience. ## Opkey This DevOps testing tool is a no-code test automation platform for both business and technical users, streamlining the automation of applications. 
It is designed for packaged apps like Oracle, Workday, Coupa, and Salesforce. This tool streamlines end-to-end testing efforts across modern enterprise architectures. ![](https://cdn-images-1.medium.com/max/2694/0*tSfL_oJFOP-GJuJW.png) **Opkey is considered one of the best DevOps testing tools for several reasons:** * It incorporates process mining and change impact analysis features. * It is compatible with more than 15 packaged apps and 150 technologies, allowing users to execute single and cross-app tests seamlessly through no-code automation. * It provides a repository of up to 30,000 pre-built tests, enabling users to rapidly automate tests and achieve a remarkable 90% test coverage within hours. * It provides users with scheduling tests, auto-generating reports, collaborative features across teams, and rapid detection of test failures. * It streamlines the test creation process with a drag-and-drop test builder, empowering users to automate tests easily, regardless of complexity. > **Effortlessly compare and find differences between texts with our Online [Text Compare](https://www.lambdatest.com/free-online-tools/text-compare?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) tool. Ideal for developers and writers, this String Diff utility simplifies your txt comparison tasks, ensuring accuracy and efficiency. ** ## EMMA EMMA serves as a unit testing framework designed specifically for Java applications. The tool strongly emphasizes tracking test case coverage and identifying areas where additional cases can be implemented. It is an open-source tool that enhances rapid code file evaluation and easy installation and integration processes, ensuring swift deployment and feedback. ![](https://cdn-images-1.medium.com/max/2684/0*ZFaPClm1gkhh6uZN.png) **EMMA is considered one of the best DevOps testing tools for several reasons:** * It is exceptionally fast, with minimal runtime overhead for added tools and a swift bytecode tool. This characteristic makes it an ideal software testing solution for agile teams. * It provides flexibility by offering up to three output report formats: plain text, HTML, and XML. * It is 100% pure Java, eliminating external library dependencies. It seamlessly operates in any Java 2 JVM, contributing to its versatility and compatibility. ## Raygun It is a mostly used tool for software development and DevOps, ensuring application reliability and performance. It is a monitoring and error-tracking platform that empowers development teams to identify, diagnose, and resolve real-time issues. It provides actionable insights into application errors and performance bottlenecks. It enables organizations to proactively address issues, reduce downtime, and enhance user satisfaction, leading to higher software quality and improved customer experiences. ![](https://cdn-images-1.medium.com/max/3200/0*9ZcnB2vAftvQZBJp.png) **Raygun is considered one of the best DevOps testing tools for several reasons:** * It provides real-time alerting capabilities to notify teams about critical issues or performance anomalies. * It offers customizable dashboards that visually represent application performance and error trends. * It includes customer experience monitoring features that allow teams to track user interactions and identify areas for improvement. * It provides deployment tracking capabilities to help teams correlate application performance with code changes. 
* It prioritizes security and offers features to help teams identify and address security vulnerabilities in their applications. * It provides API specifications that allow teams to integrate this tool into your existing workflows and customize their monitoring and error-tracking processes. > **Discover and evaluate JSONPath expressions with ease using the [JSONPath tester](https://www.lambdatest.com/free-online-tools/jsonpath-tester?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) tool. Perfect for quick debugging and testing. Try for free now! ** ## Functionize Functionize is a DevOps testing tool leveraging AI to equip teams with end-to-end tests that operate at scale in the cloud. This approach facilitates accelerated testing, reduced costs, and enhanced quality within CI/CD environments. ![](https://cdn-images-1.medium.com/max/3200/0*wIJ9eFL6N7ChB382.png) This AI-powered platform adopts a unique big data approach, having more stable tests. Additionally, it empowers teams and future-proofs skills by introducing low-code intelligent tests, eliminating the necessity for highly technical individuals to automate testing. **Functionize is considered one of the best DevOps testing tools for several reasons:** * It offers a bug-tracking feature for users to easily track and manage bugs throughout the testing process. * It offers a user-friendly test recorder, making it easy for testers to create and modify tests without writing new code. * It allows users to execute multiple tests simultaneously for faster and more efficient testing. ## QF-Test QF-Test is a DevOps testing tool and cross-platform software designed for GUI test automation. It automates various technologies, including Java/Swing, SWT, Eclipse plug-ins, RCP applications, Java applets, Java Web Start, ULC, and cross-browser test automation for static and dynamic web-based applications. It specializes in HTML and AJAX frameworks such as ExtJS, GWT, GXT, RAP, Qooxdoo, Vaadin, PrimeFaces, ICEfaces, and ZK. ![](https://cdn-images-1.medium.com/max/3110/0*RdaFQNyz3WIaXeXW.png) **QF-Test is considered one of the best DevOps testing tools for several reasons:** * It allows you to create and replay automated tests easily through record/replay without requiring any programming skills. * It offers robust component recognition for both computer and cell phone devices. * It allows you to create tests in common scripting languages like Jython, Groovy, and JavaScript. * It allows you to monitor daily to ensure the quality of existing functionalities through automated tests. ## Vagrant Vagrant is a DevOps testing tool that seamlessly constructs and oversees virtual machine environments within a unified workflow. With a specific workflow and a primary emphasis on automation, Vagrant effectively reduces the setup time for development environments and enhances production parity. This tool enables developers to quickly create and manage virtual environments that closely mimic production settings. ![](https://cdn-images-1.medium.com/max/3200/0*-2s1KJhuXe3xflTP.png) **Vagrant is considered one of the best DevOps testing tools for several reasons:** * It allows you to define and manage multiple virtual machines within a single Vagrantfile, which is useful for simulating complex network topologies or distributed systems. * It supports versioning and sharing base images, known as “boxes,” ensuring consistency among team members. 
* It integrates with tools like Puppet, Chef, and Ansible for automating setup and configuration. > **Easily convert Strings to JSON format with [String to JSON](https://www.lambdatest.com/free-online-tools/string-to-json?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) converter. Quick, accurate, and user-friendly interface for developers and professionals. ** ## PagerDuty PagerDuty is a DevOps testing tool that enhances businesses’ brand reputation by providing an incident management solution that supports CI strategy. This tool is very helpful in helping teams deliver high-performing applications. It enables the quick identification and resolution of incidents, ensuring continuous availability and optimal application performance. ![](https://cdn-images-1.medium.com/max/3200/0*62jls77hTdZcv9jB.png) **PagerDuty is considered one of the best DevOps testing tools for several reasons:** * It centralizes incident management, allowing teams to quickly detect, respond to, and resolve incidents. * It facilitates on-call scheduling, ensuring the right person is notified at the right time in case of an incident. * It provides alerting and notification capabilities, notifying team members via various channels (e.g., SMS, phone call, email) when incidents occur. * It offers analytics and reporting features, allowing teams to analyze incident trends, measure response times, and identify areas for improvement. > **Convert your text to lowercase with our reliable and easy-to-use online tool. Get started now and save time on editing. Try [text lowercase](https://www.lambdatest.com/free-online-tools/text-lowercase?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) tool for free. ** ## Snort Snort, a robust free and open-source tool, is crucial in detecting intruders and highlighting malicious attacks against the system. Its capabilities include real-time traffic analysis and packet logging, making it applicable in the DevOps methodology to bolster the security of both the application and the infrastructure. ![](https://cdn-images-1.medium.com/max/3200/0*P32yPS1jU5DXgwi2.png) **Snort is considered one of the best DevOps testing tools for several reasons:** * It captures and logs network traffic, allowing you to analyze packets for suspicious activity. * It can analyze network protocols to detect anomalies or deviations from expected behavior. * It can be integrated with other security tools and systems, enhancing overall security posture. ## Stackify Retrace This DevOps testing tool is under continuous testing tools, offering real-time logs, error queries, and more directly into the workstation. With its intelligent orchestration for the software-defined data center, Stackify Retrace empowers teams to identify and resolve issues swiftly, ensuring consistent availability and performance of the application. ![](https://cdn-images-1.medium.com/max/3200/0*C2tA138tTPf3qj-R.png) **Stackify is considered one of the best DevOps testing tools for several reasons:** * It provides real-time insights into application performance, including response times, database queries, and external service calls. * Its error tracking feature captures and logs application errors, providing detailed information to help diagnose and fix issues quickly. * Its deployment tracking feature helps you track and monitor application deployments, making identifying issues introduced by new code releases easier. 
> [**Text Uppercase](https://www.lambdatest.com/free-online-tools/text-uppercase?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) is a free online tool for converting text into uppercase. Enter your text and click the convert button to make all lowercase letters upper case. ** ## Ansible Ansible is an open-source automation tool that finds its place in IT configuration management, application deployment, and task automation within the DevOps methodology. It excels in automating repetitive tasks and ensuring consistency in the application deployment process. It enables teams to manage and scale infrastructure effortlessly, making it a popular choice among DevOps teams. ![](https://cdn-images-1.medium.com/max/3200/0*0zLrZGE1fM5iWMS4.png) **Ansible is considered one of the best DevOps testing tools for several reasons:** * It provides flexible execution environments, allowing users to define and manage the runtime environment for their automation tasks. * It offers analytics capabilities and integration with Red Hat Insights for monitoring and optimizing automation performance and efficiency. * It supports the development and testing of automation workflows, including an integrated development environment (IDE) and a command-line interface (CLI). ## Nagios It is an open-source DevOps tool widely used to monitor software applications during testing processes. This tool also monitors the working of the infrastructure of the software application, allowing easy and early identification of bugs and issues. It gives an alert when any issue is raised during the testing process. ![](https://cdn-images-1.medium.com/max/3200/0*RkV825MfcRNG6Nkz.png) **Nagios is considered one of the best DevOps testing tools for several reasons:** * It can monitor network services such as SMTP, POP3, HTTP, NNTP, PING, and many others. * It allows users to extend its functionality to monitor virtually any device or application. * It can generate performance graphs to help users visualize trends and identify potential issues. * It can automatically respond to problems by executing scripts or restarting services. In addition to the well-known and widely used DevOps testing tools, other [DevOps monitoring tools](https://www.lambdatest.com/blog/devops-monitoring-tools/?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=blog) may prove useful based on your project needs for performing comprehensive DevOps testing. > **A free online tool to generate strings of text and repeat them up to 25 time. try [text repeater](https://www.lambdatest.com/free-online-tools/text-repeater?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) tool now! ** ## Other Bonus DevOps Testing Tools Now that we have covered the most commonly used tools, let’s explore some emerging tools in the field of DevOps testing that can be utilized based on your project needs. ## Puppet It is an open source automation DevOps tool for IT configuration management. ![](https://cdn-images-1.medium.com/max/3200/0*P-xotZ33S6xRBSqO.png) This tool allows for the automation of repetitive tests, maintaining consistency in the application deployment process. You can easily manage and scale the infrastructure, making it a good tool among DevOps teams. ## Terraform It is also an open-source DevOps tool that allows provisioning and managing infrastructure resources consistently and automatically. 
![](https://cdn-images-1.medium.com/max/3200/0*VxuQ_ngy0ggBLWYs.png) It allows you to define and handle infrastructure easily as code. > **Rotate your text and add some creativity to your words with our Text Rotator tool. Try it out now and see the results in just a few clicks! ** ## SaltStack SaltStack is an open-source remote execution and configuration management tool that finds its application in the DevOps methodology for automating the configuration and management of servers. ![](https://cdn-images-1.medium.com/max/3200/0*qjBkjihGx0TfCso9.png) With SaltStack, teams can effortlessly handle and scale infrastructure, establishing it as a favored tool among DevOps teams. ## Qunit QUnit emerges as a JavaScript unit test framework, operating similarly to JUnit but distinguishing itself in the code it operates upon. ![](https://cdn-images-1.medium.com/max/3200/0*1pdmxH9r4uJrVaSf.png) Notably utilized in the jQuery project, QUnit plays a crucial role in testing jQuery, jQueryUI, and jQuery Mobile. > **Don’t waste your time manually converting decimal numbers to Roman numerals. Use our [Decimal to Roman](https://www.lambdatest.com/free-online-tools/decimal-to-roman?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=free_online_tools) Converter tool to convert decimal numbers to Roman numerals and get results instantly. ** ## Gauntlt It is an automated security testing framework that offers various hooks to security tools and ensures they remain within its security scope. ![](https://cdn-images-1.medium.com/max/3200/0*uuS3shMgVtYbUQVp.png) It is positioned as a common CI security framework and supports various frequently used security testing tools. It facilitates a broad spectrum of tests, from simple port scans to more intricate and intrusive evaluations like SQL injections. ## Sahi Pro Sahi, an automation testing tool designed for web applications, is available in open-source and proprietary versions. It is an open-source record-and-playback web application [regression testing](https://www.lambdatest.com/learning-hub/regression-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub) tool; Sahi operates independently of the browser and operating system. ![](https://cdn-images-1.medium.com/max/3200/0*AuqCAnUjiwXpnunY.png) This unique feature allows users to write test scripts once for a browser and seamlessly run them across multiple operating systems and browsers without significant setbacks. ## FitNesse It is a DevOps software tool that seamlessly fits into different aspects of the DevOps strategy. It operates as a Web Server, a Wiki, and an automated testing tool for software. ![](https://cdn-images-1.medium.com/max/3200/0*hEhyR-vB--BYCgIe.png) Rooted in the integration testing framework developed by Ward Cunningham, FitNesse is purpose-built to support [acceptance testing](https://www.lambdatest.com/learning-hub/acceptance-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_03&utm_term=vs&utm_content=learning_hub), going beyond mere reliance on unit testing. In this context, it is the go-to resource for clearly describing system functions. ## pytest It is a mature and feature-rich testing tool designed for the Python programming language. More than just a testing tool, [pytest](https://www.lambdatest.com/learning-hub/pytest-tutorial) improves Python code quality. 
![](https://cdn-images-1.medium.com/max/3200/0*gFh3phpYDYwdg1UX.png) Positioned as a run-anything, no-boilerplate, and no-required-APIs test framework for Python, pytest brings an element of enjoyment to the testing process. Its advantages lie in simplicity, automatic test discovery, modular fixtures, and intelligent error output, making it an ideal choice as a test framework. > **Need to know how many characters are in your text? Get an accurate count of the characters in your text with our free online tool. Try it now and see how easy it is! ** ## Gatling Gatling, an open-source load testing framework built on technologies like Scala, Akka, and Netty, is custom-built for performing load tests and analyzing the performance of various services. ![](https://cdn-images-1.medium.com/max/3200/0*FtrR697_RZ_3oG58.png) Its primary focus lies in executing load tests on web applications. Accordingly, Gatling is designed to execute performance tests as an integral part of production code. ## Conclusion Finding the most suitable DevOps tool can be overwhelming. To help in your decision-making process, we’ve curated a list of DevOps testing tools, offering insights into the top features that are crucial to your needs. This blog aims to provide a comprehensive list of current testing-related DevOps tools, accompanied by detailed descriptions. Additionally, we outline the fundamental features observed when these tools are implemented. However, it’s essential to acknowledge that no single DevOps testing tool includes all the necessary capabilities for supporting a DevOps practice. Therefore, several key factors must be considered when selecting tools aligned with your organization’s or product’s specific needs and DevOps objectives. Consider the software or application platforms and the underlying infrastructure technology, ensuring compatibility with on-premises, cloud, or hybrid environments. Verify that the chosen tools seamlessly integrate with various DevOps pipeline technologies, existing development tools, project management platforms, and Integrated Development Environments (IDEs).
nazneenahmad
1,875,449
Insights for young investors: Top tips to kickstart your investment journey
Embarking on an investment journey can be thrilling yet daunting, especially for young and new...
0
2024-06-03T14:00:13
https://dev.to/eldadtamir/insights-for-young-investors-top-tips-to-kickstart-your-investment-journey-2mlf
ai, investors, younginvestors, data
Embarking on an investment journey can be thrilling yet daunting, especially for young and new investors. Starting early is crucial, allowing time to harness the power of compound interest and to recover from market fluctuations. Recognizing the unique challenges faced by new investors, such as balancing risk in a volatile market, is the first step towards overcoming them. ## **1. Embrace long-term investing** Long-term vision is key. Short-term trading strategies like day trading are often ineffective. Maintaining equity investments for many years is essential to ride out market volatility and benefit from the growth trajectory of well-chosen stocks. The value of patience and consistency cannot be overstated. By making recurring investments in the market, investors can capitalize on the power of compounding interest and long-term market growth. Staying invested through market ups and downs allows investors to smooth out potential losses and enhance gains, avoiding the pitfalls of market timing. ## **2. Strategic stock selection: Think big, think data** Selecting the right stocks involves focusing on large, stable companies and indices like the S&P 500, enhanced by sophisticated data analytics. When venturing into smaller stocks, extensive data and thorough analysis are necessary to make informed decisions and avoid speculative risks. ## **3. Stocks over bonds: Maximizing long-term returns** For those seeking long-term growth, stocks are often favored over bonds. Bonds have a limited upside and should not be a primary focus for younger investors who are better positioned to weather the volatility of the stock market for higher returns. ## **4. Managing risks with intelligence and diversification** Effective risk management involves more than just avoiding potential losses—it requires intelligent diversification and a deep understanding of asset interactions under different market conditions. Comprehensive data helps shape a diversified investment strategy that mitigates risk while positioning for growth. A diversified portfolio and thoughtful asset allocation are foundational concepts in investing. These principles aim to balance the risks and rewards by spreading investments across different asset types and sectors, helping reach long-term financial goals. ## **5. Overcoming emotional investing: Learning from common mistakes** Fear is often the biggest enemy of successful investing. Maintaining discipline and relying on data-driven insights to guide investment choices is crucial, steering clear of emotional decision-making. Common mistakes include chasing high returns quickly without proper research, overlooking the impact of fees on investment returns, and making emotional decisions during market dips. Learning from these mistakes can save a lot of headaches and money. ## **Conclusion: Building a diverse and resilient portfolio** Understanding investment basics, setting realistic financial goals, and employing strategic risk management are all enhanced by a disciplined approach. As you embark on your investment journey, applying these principles can help you build a robust, diversified portfolio that is well-aligned with your long-term financial objectives.
eldadtamir
1,875,448
Serverless Workloads on Kubernetes: A Deep Dive with Knative
The ever-evolving landscape of cloud computing demands agile and scalable solutions for application...
0
2024-06-03T13:57:35
https://dev.to/platform_engineers/serverless-workloads-on-kubernetes-a-deep-dive-with-knative-21f
The ever-evolving landscape of cloud computing demands agile and scalable solutions for application development and deployment. Serverless computing offers a compelling paradigm shift, abstracting away server management tasks and enabling developers to focus solely on application logic. However, traditional serverless platforms often lack the flexibility and control desired by some organizations, particularly those already invested in Kubernetes. This is where Knative emerges as a powerful bridge, facilitating the execution of serverless workloads on top of Kubernetes infrastructure. ### Knative: Orchestrating Serverless on Kubernetes Knative is an open-source project that extends Kubernetes with a set of primitives specifically designed for serverless development and deployment. It provides a higher-level abstraction layer for managing serverless applications as Kubernetes resources, enabling developers to leverage familiar Kubernetes concepts and tools. Knative consists of several core components that work together to orchestrate the lifecycle of serverless functions and event-driven applications: * **Knative Serving:** This component acts as the heart of serverless functionality within Knative. It manages deployments and scaling of containerized serverless functions, exposing them via HTTP routes or event triggers. Developers define serverless functions as containerized applications and package them using container images. Knative Serving utilizes Kubernetes deployments and Horizontal Pod Autoscalers (HPA) to manage the lifecycle and scaling of these functions based on traffic patterns. ```yaml # Sample Knative Serving configuration for a function apiVersion: serving.knative.dev/v1 kind: Service metadata: name: my-function spec: template: spec: containerSpec: image: my-function-image # Function logic resides within the container ``` * **Knative Eventing:** This component facilitates event-driven communication between microservices and serverless functions. It defines a standardized event model and infrastructure for delivering events to interested consumers. Knative Eventing leverages Kubernetes custom resources for defining event types, sources, and sinks. Event sources like Kafka or cloud pub/sub can be integrated to produce events, while serverless functions can act as event consumers, triggered by incoming events. ```yaml # Sample Knative Eventing configuration for an event source apiVersion: eventing.knative.dev/v1 kind: EventSource metadata: name: my-event-source spec: # Configure connection details for the event source (e.g., Kafka) ``` * **Knative Build:** This component provides a framework for container image building and integration with source code repositories. It streamlines the process of building container images for serverless functions from source code, enabling developers to define build configurations using Knative Build resources. These core components, along with other extensions like Knative Queue and Kourier, provide a comprehensive framework for managing the entire lifecycle of serverless functions and event-driven applications on Kubernetes. ### Advantages of Knative for Serverless Workloads While traditional serverless platforms offer convenience, Knative brings several advantages for organizations invested in Kubernetes: * **Kubernetes Integration:** Knative leverages the established Kubernetes ecosystem, allowing developers to utilize familiar tools and concepts like deployments, HPA, and secrets management. 
This integration minimizes the learning curve and simplifies serverless development within existing Kubernetes environments. * **Portability:** Knative applications are defined as Kubernetes resources, making them inherently portable across different Kubernetes clusters. This enables easier migration of serverless workloads between cloud providers or on-premises deployments with compatible Kubernetes infrastructure. * **Flexibility:** Knative offers fine-grained control over serverless functions. Developers can customize resource allocation, container configurations, and scaling behavior using familiar Kubernetes constructs. This level of control caters to a wider range of application needs compared to some serverless platforms. * **Extensibility:** The modular architecture of Knative allows for extensibility and customization. Developers can leverage additional Knative extensions for specific functionalities like serverless workflows or advanced eventing patterns. ### Challenges and Considerations While Knative offers a compelling solution for serverless on Kubernetes, some considerations require attention: * **[Platform Engineering](www.platformengineers.io) Overhead:** Knative introduces additional complexity to the platform compared to managed serverless platforms. Platform engineers need to manage and maintain the underlying Kubernetes infrastructure and Knative components, requiring additional expertise. * **Development and Operational Complexity:** Developing and operating serverless functions on Knative might involve a steeper learning curve compared to using higher-level abstractions offered by some serverless platforms. Developers need to understand Kubernetes concepts and Knative constructs to effectively build and manage serverless applications. * **Debugging and Observability:** Debugging serverless functions running on Kubernetes can be more challenging compared to managed serverless platforms. Additional tools and techniques are needed to gain insights into function execution and identify potential issues.
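To make the flexibility point above more concrete, here is a minimal sketch of how scaling behavior and resource allocation can be tuned per revision on a Knative Service. It assumes the standard `autoscaling.knative.dev` annotations (annotation names can vary slightly between Knative releases) and a hypothetical `my-function-image` container image:

```yaml
# Sketch: per-revision scaling and resource tuning on a Knative Service
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-function
spec:
  template:
    metadata:
      annotations:
        # Keep at least one replica warm and cap scale-out
        autoscaling.knative.dev/min-scale: "1"
        autoscaling.knative.dev/max-scale: "10"
        # Target concurrent requests per replica before scaling out
        autoscaling.knative.dev/target: "50"
    spec:
      containers:
        - image: my-function-image  # hypothetical image name
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
```

Because these knobs live in ordinary Kubernetes resources, they can be versioned, reviewed, and applied with `kubectl apply` alongside the rest of the cluster configuration.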
shahangita
1,874,848
React Concepts: Hook Proximity
Keeping your component's side effects (like data fetching, subscriptions, or logging) close to where they are triggered, avoiding the pitfalls of prop drilling.
0
2024-06-03T13:53:26
https://dev.to/clschnei/react-concepts-hook-proximity-43jj
reactconcepts, react, javascript, hooks
--- title: React Concepts: Hook Proximity published: true description: Keeping your component's side effects (like data fetching, subscriptions, or logging) close to where they are triggered, avoiding the pitfalls of prop drilling. tags: reactconcepts, react, javascript, hooks cover_image: https://domf5oio6qrcr.cloudfront.net/medialibrary/12416/65b0e235-2aa8-4138-9e1e-12463b8c6f4e.jpg --- ## Why Hook Proximity? [Prop drilling](https://dev.to/codeofrelevancy/what-is-prop-drilling-in-react-3kol) occurs when you pass data through many layers of components. This leads to complex and difficult-to-maintain code. By maintaining *hook proximity*, you can localize any side effects, and reduce the need to pass props through the component tree. As a practical example, we'll create a simple application where two child components need to fetch data from different endpoints. We'll define a simple `useFetch` abstraction and demonstrate the different patterns. ```jsx const useFetch = (url) => { const [data, setData] = useState(null); async function fetchData() { const response = await fetch(url); const data = await response.json(); setData(data); } return [data, fetchData]; }; ``` ### Without Hook Proximity ```jsx import React, { useState, useEffect } from 'react'; import { useFetch } from './fetch'; export function ParentComponent() { const [data1, fetchData1] = useFetch('https://api.example.com/data1'); const [data2, fetchData2] = useFetch('https://api.example.com/data2'); return ( <> <ChildComponent1 data={data1} fetchData={fetchData1} /> <ChildComponent2 data={data2} fetchData={fetchData2} /> </> ); }; function ChildComponent1({ data, fetchData }) { return ( <div> <button onClick={fetchData}>Fetch Data 1</button> {data && <div>{JSON.stringify(data)}</div>} </div> ); } function ChildComponent2({ data, fetchData }) { return ( <div> <button onClick={fetchData}>Fetch Data 2</button> {data && <div>{JSON.stringify(data)}</div>} </div> ); } ``` ### With Hook Proximity ```jsx import React, { useState, useEffect } from 'react'; import { useFetch } from './fetch'; export function ParentComponent() { return ( <> <ChildComponent1 /> <ChildComponent2 /> </> ); }; function ChildComponent1() { const [data, fetchData] = useFetch('https://api.example.com/data1'); return ( <div> <button onClick={fetchData}>Fetch Data 1</button> {data && <div>{JSON.stringify(data)}</div>} </div> ); }; function ChildComponent2() { const [data, fetchData] = useFetch('https://api.example.com/data2'); return ( <div> <button onClick={fetchData}>Fetch Data 2</button> {data && <div>{JSON.stringify(data)}</div>} </div> ); }; ``` ## Explanation In the first example, the `ParentComponent` manages the data fetching for both `ChildComponent1` and `ChildComponent2`, and the child components rely on props to trigger the data fetch and receive the data. This approach requires prop drilling, making the code harder to manage as the number of components grows. In the second example, we use hook proximity by moving the `useFetch` hook directly into the child components. This eliminates the need for prop drilling, simplifies the component hierarchy, and makes the side effects more localized and easier to reason about. ## Conclusion Avoiding prop drilling by using hook proximity helps create more maintainable and understandable React components. By keeping state and side effects close to where they are needed, you can reduce complexity and improve the clarity of your code.
clschnei
1,875,447
Creating Shell Extensions in .NET 8 with SharpShell
In this blog, I'll walk you through creating a Windows shell extension using the SharpShell library...
0
2024-06-03T13:51:16
https://dev.to/issamboutissante/creating-shell-extensions-in-net-8-with-sharpshell-2ioe
sharpshell, windowsextentions, dotnet, shellextentions
In this blog, I'll walk you through creating a Windows shell extension using the SharpShell library in .NET 8. SharpShell is a popular library for creating shell extensions, but it was traditionally used with the .NET Framework. Thanks to the Microsoft Windows Compatibility Pack, it's now possible to create shell extensions in .NET 8. ## Prerequisites - Visual Studio 2022 or later - .NET 8 SDK - Basic knowledge of C# and Windows shell extensions ## Step 1: Create a .NET 8 Class Library First, we need to create a .NET 8 class library. Open Visual Studio and create a new project: 1. Select **Class Library**. 2. Name the project `SharpShellNet8Demo`. 3. Choose **.NET 8.0** as the target framework. ## Step 2: Modify the .csproj File Next, we need to download and include the necessary packages: `Microsoft.Windows.Compatibility` and `SharpShell`. Modify the project file (`.csproj`) to target only Windows and include these packages: ```xml <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net8.0-windows</TargetFramework> <ImplicitUsings>enable</ImplicitUsings> <Nullable>enable</Nullable> <UseWindowsForms>true</UseWindowsForms> <EnableComHosting>true</EnableComHosting> </PropertyGroup> <ItemGroup> <PackageReference Include="Microsoft.Windows.Compatibility" Version="8.0.4" /> <PackageReference Include="SharpShell" Version="2.7.2" /> </ItemGroup> </Project> ``` **Why we use these properties:** - **`<UseWindowsForms>true</UseWindowsForms>`**: Enables Windows Forms support in the project. - **`<EnableComHosting>true</EnableComHosting>`**: Enables COM hosting support, necessary for registering shell extensions. ## Step 3: Create a Context Menu Extension Next, we'll create a simple context menu extension. Add a new class to your project named `SimpleContextMenu.cs` and implement the following code: ```csharp using SharpShell.Attributes; using SharpShell.SharpContextMenu; using System; using System.Linq; using System.Runtime.InteropServices; using System.Windows.Forms; namespace SharpShellNet8Demo { [ComVisible(true)] [ClassInterface(ClassInterfaceType.None)] [COMServerAssociation(AssociationType.AllFilesAndFolders)] [COMServerAssociation(AssociationType.Directory)] [COMServerAssociation(AssociationType.DirectoryBackground)] [Guid("B3B9C5B6-5C92-4D1B-85E6-2F80B72F6E28")] public class SimpleContextMenu : SharpContextMenu { protected override bool CanShowMenu() => true; protected override ContextMenuStrip CreateMenu() { var menu = new ContextMenuStrip(); var mainItem = new ToolStripMenuItem { Text = "Simple Extension Action" }; var actionItem = new ToolStripMenuItem { Text = "Perform Action" }; actionItem.Click += (sender, e) => MessageBox.Show("Action performed!"); mainItem.DropDownItems.Add(actionItem); menu.Items.Add(mainItem); return menu; } [ComRegisterFunction] public static void Register(Type t) { RegisterContextMenu(t, @"*\shellex\ContextMenuHandlers\"); RegisterContextMenu(t, @"Directory\shellex\ContextMenuHandlers\"); RegisterContextMenu(t, @"Directory\Background\shellex\ContextMenuHandlers\"); } private static void RegisterContextMenu(Type t, string basePath) { string keyPath = basePath + "SimpleContextMenu"; using (var key = Microsoft.Win32.Registry.ClassesRoot.CreateSubKey(keyPath)) { if (key == null) { Console.WriteLine($"Failed to create registry key: {keyPath}"); } else { key.SetValue(null, t.GUID.ToString("B")); } } } [ComUnregisterFunction] public static void Unregister(Type t) { UnregisterContextMenu(@"*\shellex\ContextMenuHandlers\"); 
UnregisterContextMenu(@"Directory\shellex\ContextMenuHandlers\"); UnregisterContextMenu(@"Directory\Background\shellex\ContextMenuHandlers\"); } private static void UnregisterContextMenu(string basePath) { string keyPath = basePath + "SimpleContextMenu"; try { Microsoft.Win32.Registry.ClassesRoot.DeleteSubKeyTree(keyPath, false); } catch (Exception ex) { Console.WriteLine($"Error unregistering COM server from {keyPath}: {ex.Message}"); } } } } ``` ### Explanation of Attributes - **`[ComVisible(true)]`**: Makes the class visible to COM components. - **`[ClassInterface(ClassInterfaceType.None)]`**: Specifies that no class interface is generated for the class. - **`[COMServerAssociation(AssociationType.AllFilesAndFolders)]`**: Registers the shell extension for all files and folders. - **`[COMServerAssociation(AssociationType.Directory)]`**: Registers the shell extension for directories. - **`[COMServerAssociation(AssociationType.DirectoryBackground)]`**: Registers the shell extension for the background of directories. - **`[Guid("B3B9C5B6-5C92-4D1B-85E6-2F80B72F6E28")]`**: Assigns a unique identifier to the class. Generate this GUID using **Tools > Create GUID** in Visual Studio. ## Step 4: Register and Unregister the Extension To test our shell extension, we need to register and unregister it. Create two batch files, `Register.bat` and `Unregister.bat`, to handle this. ### Register.bat ```batch @echo off :: Check for administrative privileges net session >nul 2>&1 if %errorlevel% == 0 ( echo Running with administrative privileges ) else ( echo Requesting administrative privileges... goto UACPrompt ) goto Start :UACPrompt echo Set UAC = CreateObject^("Shell.Application"^) > "%temp%\getadmin.vbs" echo UAC.ShellExecute "%~s0", "", "", "runas", 1 >> "%temp%\getadmin.vbs" "%temp%\getadmin.vbs" exit /B :Start cd /d "%~dp0" :: Find DLL files ending with comhost.dll for /f "delims=" %%a in ('dir /b *comhost.dll') do set "DLLFile=%%a" if defined DLLFile goto Found echo No DLL file ending with comhost.dll found, please enter the DLL name: SET /P DLLName=Enter the DLL name to register (include .dll extension): set DLLFile=%DLLName% :Found if not exist "%DLLFile%" ( echo The specified DLL file does not exist. pause exit /b ) regsvr32 /s "%DLLFile%" taskkill /f /im explorer.exe start explorer.exe echo %DLLFile% registered and Explorer restarted. pause ``` ### Unregister.bat ```batch @echo off :: Check for administrative privileges net session >nul 2>&1 if %errorlevel% == 0 ( echo Running with administrative privileges ) else ( echo Requesting administrative privileges... goto UACPrompt ) goto Start :UACPrompt echo Set UAC = CreateObject^("Shell.Application"^) > "%temp%\getadmin.vbs" echo UAC.ShellExecute "%~s0", "", "", "runas", 1 >> "%temp%\getadmin.vbs" "%temp%\getadmin.vbs" exit /B :Start cd /d "%~dp0" :: Find DLL files ending with comhost.dll for /f "delims=" %%a in ('dir /b *comhost.dll') do set "DLLFile=%%a" if defined DLLFile goto Found echo No DLL file ending with comhost.dll found, please enter the DLL name: SET /P DLLName=Enter the DLL name to unregister (include .dll extension): set DLLFile=%DLLName% :Found if not exist "%DLLFile%" ( echo The specified DLL file does not exist. pause exit /b ) regsvr32 /u /s "%DLLFile%" taskkill /f /im explorer.exe start explorer.exe echo %DLLFile% unregistered and Explorer restarted. pause ``` ## Step 5: Build and Register the Extension 1. Build your project in Visual Studio. 2. Navigate to the output directory (usually `bin\Debug\net8.0-windows`). 3. 
Run `Register.bat` as an administrator to register the shell extension. 4. Right-click on any file or folder to see the new context menu option "Simple Extension Action". ## Step 6: Unregister the Extension When you want to unregister the shell extension, run `Unregister.bat` as an administrator. If you want to make any modifications to the project, you should unregister it first to avoid build errors caused by the DLL being in use. To prevent this issue, consider registering a copy of the `net8.0-windows` output folder rather than the original directory. ## Conclusion Creating a shell extension in .NET 8 using SharpShell and the Microsoft Windows Compatibility Pack is straightforward. This tutorial covered setting up your project, writing the shell extension code, and handling registration and unregistration. Experiment with different types of extensions to enhance your Windows Explorer experience! Feel free to reach out if you have any questions or run into issues. Happy coding!
issamboutissante
1,875,446
React Native - 15 Core Components
React Native is a powerful and popular framework developed by Facebook that allows developers to...
0
2024-06-03T13:50:21
https://dev.to/himanshuaggar/react-native-15-core-components-2ifg
javascript, mobile, reactnative, react
**React Native** is a powerful and popular framework developed by Facebook that allows developers to **build mobile applications using JavaScript and React**. One of the key advantages of React Native is its ability to create truly native applications for both iOS and Android using a single codebase. This is achieved through a set of core components provided by the framework, which bridge the gap between web development and native app development. These core components are designed to be flexible and efficient, enabling developers to create a wide variety of mobile applications with a native look and feel. In this detailed overview, we will explore 15 of the most important core components in React Native, highlighting their functionalities, use cases, and examples to master mobile app development with React Native. ## 1. View The `View` component in React Native is a fundamental container component that supports various layout styles. It is the equivalent of a `div` element in HTML and can be used to create and style containers for various elements. Here’s an example of how to use the `View` component: ``` import React from 'react'; import { StyleSheet, View, Text } from 'react-native'; function App() { return ( <View style={styles.container}> <Text>Hello, World!</Text> </View> ); } const styles = StyleSheet.create({ container: { flex: 1, backgroundColor: '#fffff', alignItems: 'center', justifyContent: 'center', }, }); export default App; ``` ## 2. Text Component The `Text` component is a basic element in React Native used to display text content on the screen. While it has some basic styling properties, you usually nest it inside other components (e.g., `View`) to create more complex UIs. ``` import React from 'react'; import { Text, StyleSheet } from 'react-native'; const SimpleText = () => { return ( <Text style={styles.text} numberOfLines={2} onPress={() => alert('Hello')}> This is an example of a Text component in React Native. Tap on me! </Text> ); }; const styles = StyleSheet.create({ text: { fontSize: 16, color: 'blue', textAlign: 'center', margin: 10, fontFamily: 'Arial', }, }); export default SimpleText; ``` ## 3. Text Input `TextInput` is a core component in React Native that allows the user to enter text. It is commonly used to collect user data, like emails or passwords. You can customize the appearance of `TextInput` by using various props such as `placeholder`, `multiline`, `maxLength`, and more. Here’s a basic example of using `TextInput`: ``` import React, { useState } from 'react'; import { TextInput, View, Button } from 'react-native'; const App = () => { const [text, setText] = useState(''); const handleSubmit = () => { alert('You entered: ' + text); }; return ( <View> <TextInput style={{ height: 40, borderColor: 'gray', borderWidth: 1 }} onChangeText={text => setText(text)} value={text} placeholder="Enter text here" /> <Button onPress={handleSubmit} title="Submit" /> </View> ); }; ``` ## 4. Button A `Button` is a built-in React Native component used to create clickable buttons. It is a simple, customizable and easy-to-use component that captures touches and triggers an `onPress` event when pressed. Here’s a simple example of how to create a `Button` in React Native: ``` import React from 'react'; import { Button } from 'react-native'; const MyButton = () => { const onPressHandler = () => { alert('Button Pressed'); }; return ( <Button title="Click me" color="#841584" onPress={onPressHandler} /> ); }; export default MyButton; ``` ## 5. 
Image The `Image` component is used to display images in a React Native application. It allows you to load and display local as well as remote images, providing essential props and methods for better image handling and customization. To use the `Image` component, you need to import it from ‘react-native’: ``` import { Image } from 'react-native'; ``` To display a local image in the application, you have to require it in the source prop of the `Image` component. Place the image file in your project directory and use the following syntax: ``` <Image source={require('./path/to/your/image.png')} /> ``` To display a remote image from a URL, you need to set the source prop with a `uri` object: ``` <Image source={{ uri: 'https://path/to/your/remote/image.png' }} /> ``` Keep in mind that you need to define the dimensions (width and height) when using remote images: ``` <Image source={{ uri: 'https://path/to/remote/image.png' }} style={{ width: 200, height: 200 }} /> ``` ## 6. SafeAreaView `SafeAreaView` is a React Native core component that helps to adjust your app’s UI elements and layout to accommodate the notches, curved edges, or home indicator on iOS devices. It ensures that content is rendered within the visible portion of the screen. Here is an example of using `SafeAreaView` in the code: ``` import React from 'react'; import { SafeAreaView, StyleSheet, Text } from 'react-native'; const App = () => { return ( <SafeAreaView style={styles.container}> <Text>Hello World!</Text> </SafeAreaView> ); }; const styles = StyleSheet.create({ container: { flex: 1, }, }); export default App; ``` ## 7. Scroll View In React Native, the `ScrollView` is a generic scrolling container used to provide a scrollable view to its child components. It is useful when you need to display scrollable content larger than the screen, such as lists, images, or text. A `ScrollView` must have a bounded height in order to properly work. Here’s a simple example of how to use the `ScrollView` component in your React Native app: ``` import React from 'react'; import { ScrollView, Text } from 'react-native'; const MyScrollView = () => { return ( <ScrollView> <Text>Item 1</Text> <Text>Item 2</Text> <Text>Item 3</Text> <Text>Item 4</Text> <Text>Item 5</Text> <Text>Item 6</Text> </ScrollView> ); } export default MyScrollView; ``` ## 8. FlatList `FlatList` is a React Native core component that displays a scrolling list of changing, but similarly structured, data. It is an efficient list component that makes use of a limited scrolling `renderWindow`, reducing the memory footprint and creating smooth scrolling. Additionally, `FlatList` supports-Headers, Footers, Pull-to-refresh, and Horizontal scrolling, among other things. Here is a basic example demonstrating how to use the `FlatList` component: ``` import React from 'react'; import { FlatList, View, Text } from 'react-native'; const data = [ { id: '1', content: 'Item 1' }, { id: '2', content: 'Item 2' }, { id: '3', content: 'Item 3' }, // ... ]; const renderItem = ({ item }) => ( <View> <Text>{item.content}</Text> </View> ); const MyFlatList = () => ( <FlatList data={data} renderItem={renderItem} keyExtractor={item => item.id} /> ); export default MyFlatList; ``` ## 9. Switch A `Switch` is a core component in React Native used to implement a “toggle” or “on-off” input. It provides a UI for the user to switch between two different states, typically true or false. The primary use case is to enable or disable a feature or setting within an application. 
`Switch` component has a boolean `value` prop (true for on, false for off) and an `onValueChange` event handler, which is triggered whenever the user toggles the switch. Here’s an example of how to use `Switch` in a React Native application: ``` import React, {useState} from 'react'; import {View, Switch, Text} from 'react-native'; const App = () => { const [isEnabled, setIsEnabled] = useState(false); const toggleSwitch = () => setIsEnabled(previousState => !previousState); return ( <View> <Text>Enable Feature:</Text> <Switch trackColor={{ false: "#767577", true: "#81b0ff" }} thumbColor={isEnabled ? "#f5dd4b" : "#f4f3f4"} onValueChange={toggleSwitch} value={isEnabled} /> </View> ); }; export default App; ``` ## 10. Modal A `Modal` is a component that displays content on top of the current view, creating an overlay that can be used for various purposes, such as displaying additional information, confirmation messages, or a selection menu. ``` import React, {useState} from 'react'; import {Modal, Text, TouchableHighlight, View, Alert} from 'react-native'; const App = () => { const [modalVisible, setModalVisible] = useState(false); return ( <View> <Modal animationType="slide" transparent={true} visible={modalVisible} onRequestClose={() => { Alert.alert('Modal has been closed.'); setModalVisible(!modalVisible); }}> <View> <View> <Text>Hello, I am a Modal!</Text> <TouchableHighlight onPress={() => { setModalVisible(!modalVisible); }}> <Text>Hide Modal</Text> </TouchableHighlight> </View> </View> </Modal> <TouchableHighlight onPress={() => { setModalVisible(true); }}> <Text>Show Modal</Text> </TouchableHighlight> </View> ); }; export default App; ``` ## 11. Pressable `Pressable` is a core component in React Native that makes any view respond properly to touch or press events. It provides a wide range of event handlers for managing user interactions, such as `onPress`, `onPressIn`, `onPressOut`, and `onLongPress`. With Pressable, you can create custom buttons, cards, or any touchable elements within your app. ``` import React from 'react'; import { Pressable, Text, StyleSheet } from 'react-native'; export default function CustomButton() { return ( <Pressable onPress={() => console.log('Pressed!')} style={({ pressed }) => [ styles.button, pressed ? styles.pressedButton : styles.normalButton, ]} > <Text style={styles.buttonText}>Press me</Text> </Pressable> ); } const styles = StyleSheet.create({ button: { padding: 10, borderRadius: 5, }, normalButton: { backgroundColor: 'blue', }, pressedButton: { backgroundColor: 'darkblue', }, buttonText: { color: 'white', textAlign: 'center', }, }); ``` ## 12. TouchableOpacity In React Native, Touchable components are used to handle user interactions like taps, long presses, and double-taps on the appropriate elements. `TouchableOpacity` is a wrapper for making views respond properly to touches: the opacity of the wrapped view is decreased when it’s active. ``` import { Text, TouchableOpacity } from 'react-native'; <TouchableOpacity onPress={() => { alert('Tapped!'); }} > <Text>Tap me</Text> </TouchableOpacity> ``` ## 13. Activity Indicator The `ActivityIndicator` is a core component in React Native that provides a simple visual indication of some ongoing activity or loading state within your application. It shows a spinning animation, which gives the user feedback that something is happening in the background. This component is particularly useful when fetching data from an external source, like a server, or while performing time-consuming operations.
To use the ActivityIndicator component, simply import it from ‘react-native’, and add it to your component tree. You can customize the appearance and behavior of the `ActivityIndicator` by providing various optional props, such as animating, color, and size. Below is an example of how to use the `ActivityIndicator` component within a functional React component: ``` import React from 'react'; import { ActivityIndicator, View, Text } from 'react-native'; const LoadingScreen = () => ( <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}> <Text>Loading, please wait...</Text> <ActivityIndicator size="large" color="#0000ff" /> </View> ); export default LoadingScreen; ``` ## 14. StatusBar The `StatusBar` component is used to control the appearance of the status bar on the top of the screen. It may strike as a bit unusual since, unlike other React Native components, it doesn’t render any visible content. Instead, it sets some native properties that can help customize the look of status bars on Android, iOS, or other platforms. To use the `StatusBar` component, you need to import it from ‘react-native’ and use it in your React component. Here’s an example: ``` import React from 'react'; import { View, StatusBar } from 'react-native'; const App = () => { return ( <View> <StatusBar barStyle="dark-content" backgroundColor="#F0F0F0" /> {/* Your other components */} </View> ); }; export default App; ``` ## 15. SectionList `SectionList` is a component used to render sections and headers in a scroll view. It helps to manage and optimize a large list of items divided into categories. It is one of the List `View` components provided by React Native along with `FlatList`. The key difference between `SectionList` and `FlatList` is that `SectionList` separates data items into sections, with headers. Here’s an example of how to use a `SectionList` in your app: ``` import React, {Component} from 'react'; import {SectionList, StyleSheet, Text, View, SafeAreaView} from 'react-native'; export default class App extends Component { render() { return ( <SafeAreaView style={styles.container}> <SectionList sections={[ { title: 'Section 1', data: ['Item 1.1', 'Item 1.2', 'Item 1.3'], }, { title: 'Section 2', data: ['Item 2.1', 'Item 2.2', 'Item 2.3'], }, ]} renderItem={({item}) => <Text style={styles.item}>{item}</Text>} renderSectionHeader={({section}) => ( <Text style={styles.sectionHeader}>{section.title}</Text> )} keyExtractor={(item, index) => String(index)} /> </SafeAreaView> ); } } const styles = StyleSheet.create({ container: { flex: 1, paddingRight: 10, paddingLeft: 10, }, sectionHeader: { fontSize: 20, fontWeight: 'bold', backgroundColor: 'lightgrey', padding: 5, }, item: { fontSize: 16, padding: 10, borderBottomWidth: 1, borderBottomColor: 'black', }, }); ```
himanshuaggar
1,875,445
Top 5 Tailwind CSS Interview Questions and Answers
If you’re a fresher aiming to land a web development role and Tailwind CSS is on the tech stack, this...
0
2024-06-03T13:48:52
https://dev.to/lalyadav/top-5-tailwind-css-interview-questions-and-answers-4do6
tailwindcss, css, cssframework, react
If you’re a fresher aiming to land a web development role and **[Tailwind CSS](https://www.onlineinterviewquestions.com/tailwind-css-interview-questions/)** is on the tech stack, this guide will equip you with the top 5 interview questions and answers. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a32uslkvgkcqceyergsd.png) Q1. What is Tailwind CSS? Ans: Tailwind CSS is a utility-first CSS framework that provides pre-built utility classes to style your HTML elements directly. Unlike traditional frameworks with pre-defined components, Tailwind gives you granular control over the look and feel of your UI. Q2. Why use Tailwind CSS? Ans: Here are some key benefits: Rapid Development: Quickly style your UI with pre-built classes, reducing development time. Highly Customizable: Tailwind offers extensive customization options through its configuration file. Responsive Design: Built-in responsive variants allow for easy adaptation across devices. Reduced CSS Size: Only the classes you use are included in the final CSS file. Q3. How do you integrate Tailwind CSS into a project? Ans: There are two main ways: Using a CDN: Include the Tailwind CSS library directly from a CDN link in your HTML. Installing with npm or yarn: Install Tailwind CSS as a package and configure it using a tailwind.config.js file. Q4. How do you apply text alignment (left, center, right) in Tailwind CSS? Ans: Use classes like text-left, text-center, and text-right to align text horizontally within an element. Q5. Explain how to create responsive margins and paddings in Tailwind CSS. Ans: Tailwind provides responsive prefixes like sm:, md:, lg:, and xl: that can be combined with margin and padding utility classes. For example, mr-4 adds a 1rem margin-right on all screens, while sm:mr-2 applies a 0.5rem margin-right from the sm breakpoint upward.
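To make the last two answers concrete, here is a minimal sketch of a small React component styled only with Tailwind utility classes. The component name and the specific classes are illustrative choices, and it assumes Tailwind is already set up in the project as described in Q3.

```tsx
// Hypothetical card component; the class names are examples, not a required API.
export function ProfileCard() {
  return (
    <div className="m-2 lg:m-6 p-4 md:p-8 rounded-lg bg-white shadow">
      {/* Centered on small screens, left-aligned from the lg breakpoint upward */}
      <h2 className="text-center lg:text-left text-xl font-bold">Jane Doe</h2>
      {/* mt-2 everywhere, mt-4 from the sm breakpoint upward */}
      <p className="mt-2 sm:mt-4 text-gray-600">Front-end developer</p>
    </div>
  );
}
```

Because every class is a plain string, the same utilities work in plain HTML as well; the React wrapper here is only for illustration.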
lalyadav
1,875,443
Creative Article Grid Using Flex
Explore this example of a responsive three-column article grid layout created using CSS Flexbox. This...
0
2024-06-03T13:48:28
https://dev.to/creative_salahu/creative-article-grid-using-flex-27io
codepen
Explore this example of a responsive three-column article grid layout created using CSS Flexbox. This design ensures optimal presentation across various devices, from desktops to tablets and mobile phones. The grid is styled with dynamic hover effects and a clean, modern aesthetic, perfect for showcasing articles or products in an elegant manner. {% codepen https://codepen.io/CreativeSalahu/pen/YzbVwXy %}
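For readers who prefer to see the idea in code without opening the embed, here is a rough, simplified sketch of how a three-column flexbox card grid can be structured. It is not the code from the pen above; the component, style values, and breakpoint behavior are illustrative assumptions.

```tsx
import type { CSSProperties } from "react";

// Flex container: cards wrap onto new rows as the viewport narrows.
const grid: CSSProperties = { display: "flex", flexWrap: "wrap", gap: "1rem" };

// Each card grows and shrinks around a 300px base, giving roughly three columns on desktop.
const card: CSSProperties = {
  flex: "1 1 300px",
  padding: "1rem",
  borderRadius: "8px",
  boxShadow: "0 2px 8px rgba(0, 0, 0, 0.1)",
};

export function ArticleGrid({ titles }: { titles: string[] }) {
  return (
    <div style={grid}>
      {titles.map((title) => (
        <article key={title} style={card}>
          <h3>{title}</h3>
        </article>
      ))}
    </div>
  );
}
```

The hover effects mentioned above would normally live in a stylesheet rather than inline styles, so they are omitted from this sketch.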
creative_salahu
1,875,437
Which is good for making scrapers in Golang?
I'm thinking of making a web scraper again and I heard there is a new library named Rod, should I...
0
2024-06-03T13:38:42
https://dev.to/7h33mp7ym4n/which-is-good-for-making-scrappers-in-golang-4118
discuss, go, webdev, beginners
I'm thinking of making a web scraper again, and I heard there is a new library named Rod. Should I give it a try, or should I stick with the OG Colly?
7h33mp7ym4n
1,875,436
BSides312 2024: Insights and Innovations in Cybersecurity
The 312 area code is one of the original 86 area codes created by Bell System in 1947. In those early...
0
2024-06-03T13:38:00
https://blog.gitguardian.com/bsides312-2024/
security, cybersecurity, ai, llm
The [312 area code](https://en.wikipedia.org/wiki/Area_code_312?ref=blog.gitguardian.com) is one of the original 86 area codes created by Bell System in 1947. In those early days of telecom, when the entire system could handle less than 100 exchanges, 312 supported the growing population of America's second-largest city and surrounding areas, connecting this diverse community together and letting them communicate in a brand new way. This same spirit of coming together and sharing our knowledge about ever-evolving challenges was present at the [first-ever BSides312 in Chicago](https://bsides312.org/?ref=blog.gitguardian.com).  Saturday morning, May 11th, 2024, brought together 247 attendees from a wide variety of backgrounds, including long-time security professionals, students, and developers keen to learn more about security. Throughout the day, 10 speakers shared their knowledge from the stage, and many hallway conversations happened throughout the venue, [The Bottom Lounge](https://bottomlounge.com/bottom-lounge-for-private-events/?ref=blog.gitguardian.com), a popular music venue that was gracious enough to host the event. At the core of the event was a pervasive theme of community and connecting as human beings. This was true from our keynote, "Securing Sexuality: Rewiring Our Intimate Connections," by [Dr. Stefani Georlich](https://www.linkedin.com/in/sgoerlich/?ref=blog.gitguardian.com), who reminded us all that behind every data point is a human being whose privacy and security matter deeply, all the way through the closing session from legendary hacker [Chris Roberts](https://www.linkedin.com/in/sidragon1/?ref=blog.gitguardian.com) who talked about the need for inter-team collaboration in his talk "Evolution of threat intelligence, tracking your boss for fun, profit, and protection." [![](https://lh7-us.googleusercontent.com/u5IT0BySs9G22716dnZC4kVdAakJbNH8mr7oLgskK0yCefCF0HIjJX4f9U2rj9ZA1Skg1f9PY-AiYQFDeL9LJPPkt1WrDRaHFXdK_U_cz1IN7sRm9gDtohViFMwvIFBU8WHBmMgsEtMt5gA5-GBVZQ)](https://www.linkedin.com/posts/dwaynemcdaniel_bsides312-activity-7195078023451217921--D4p?ref=blog.gitguardian.com) Dr. Stefani Georlich Here are just a few of the other highlights from this incredible day.  You can't gauge risk without understanding probabilities -------------------------------------------------------- When you first try to quantify security risks, it can seem daunting. With so many new threats emerging daily and the likelihood of any event being nearly impossible to predict, it might seem like a hopeless pursuit. But it turns out we have already been doing precisely this kind of risk assessment for a long time in the realm of insuring human beings, who are very risk-prone and unpredictable.  In her talk "Educating Your Guesses: How To Quantify Risk and Uncertainty," [Sara Anstey, Director of Data Analytics and Risk at Novacoast](https://www.linkedin.com/in/sara-anstey/?ref=blog.gitguardian.com), explained that most current models of cybersecurity risk are not properly taking into account [range compression](https://www.youtube.com/watch?v=Ud9dZgD8NRM&ref=blog.gitguardian.com) and are making decisions with not enough information. Essentially, if we have two threats, we are going to judge them against one another rather than stepping back and looking at the larger context. Sara urged us to embrace uncertainty and utilize Monte Carlo simulations to better estimate potential outcomes. 
By defining risk, setting time ranges, and assigning values with confidence intervals, she demonstrated how to produce more reliable risk assessments. Anstey's session reminded us that while we can't predict the future, we can improve our estimations and make more informed decisions. Run those simulations! [![](https://lh7-us.googleusercontent.com/f6ci7qw4m_3icY6jFonoO0P4_Pzf9zn1VXTJ2ckQXecoO0gvDwKSN58DZm7WWK1KJ6Pw9SgUVQkIInKOZ1A_nxuT84PixEW4GM7WjqARhKdesPgHnl5jytwNzKQIkQIc-IivZDwtqKIbYoJJsEAPaw)](https://www.linkedin.com/posts/dwaynemcdaniel_bsides312-activity-7195101561449037825-_I0H?ref=blog.gitguardian.com) Educating Your Guesses: How To Quantify Risk and Uncertainty from Sara Anstey You can learn a lot about security by fighting actual fires ----------------------------------------------------------- In her session "Dumpster Fires: 3 Things About IR I Learned by Being a Firefighter," [Dr. Catherine J. Ullman, Principal Technology Architect, Security at the University at Buffalo and firefighter](https://www.linkedin.com/in/catherine-ullman-26a9406/?ref=blog.gitguardian.com), showed that incident response (IR) in cybersecurity and firefighting have a lot of similar goals. Both require adequate preparation and proper mitigation steps to be in place. During a fire or an incident, we must be careful not to cause more harm to ourselves or others as we work. After either type of event, we must take into account a cleanup and recovery phase. Catherine shared a few personal stories, such as a close call involving an air tank and some melted hoses, which she overlooked due to complacency. She said it taught her how critical vigilance and proper training are, as her other training and preparedness are the only reasons she did not get seriously hurt. Every incident is unique and requires a tailored approach, and during any incident response, we must show patience and avoid tunnel vision. No matter what type of incident you are responding to, it is very important to take a few moments to properly assess and think through what to do next in order to get the best outcome. [![](https://lh7-us.googleusercontent.com/8PW339RooJjVsZqFK1caHEeMZKK4zZEVAUcaaAAU-MB-CwrXjhwpvaQ9hfuopN-9RsVtic22Z3SZWeF7R6PlKjJ96odHfctBTp2vZhGUT4wDJOtWiSRUhYRCuPYsS8n1gWOvm96fBmNgn17JnVHGqA)](https://www.linkedin.com/posts/dwaynemcdaniel_bsides312-activity-7195122309936865281-XnUL?ref=blog.gitguardian.com) Dumpster Fires: 3 things about IR I learned by being a firefighter, from Dr. Catherine J. Ullman Security training must be inclusive of everyone, especially the most vulnerable ------------------------------------------------------------------------------- The "Senior Citizens Fighting Scammers" session by [National Security Research Scientist Anita Nikolich](https://ischool.illinois.edu/people/anita-nikolich?ref=blog.gitguardian.com) introduced the [DART Collective](https://dartcollective.net/?ref=blog.gitguardian.com), a [U,S. National Science Foundation-funded](https://www.nsf.gov/?ref=blog.gitguardian.com) project aimed at protecting senior citizens from scams. While our seniors are often the target of increasingly advanced criminals who prey on their lack of technical prowess, there are few engaging training paths that have proven effective. Her research into how folks who did not grow up immersed in tech can become more cybersecurity security-minded led to the creation of a free mobile game called [DeepCover](https://dartcollective.net/deepcover/?ref=blog.gitguardian.com). 
The DART Collective is working to combine cybersecurity expertise, game design, and social media campaigns to offer an engaging and accessible online training portal. Anita said we must make cybersecurity education appealing without resorting to "chocolate-covered broccoli" approaches. They interviewed many seniors and took into account what they look for in an app and how they like to learn. That all shows up in DeepCover.  Next, they are creating an interactive online portal where seniors can play solo or in a group to solve challenges that prepare them for scammers. The social and competitive aspects should not be overlooked, as this provides an incentive to keep playing. As they seek to expand, the initiative is working with various community centers, including museums, churches, and senior centers, aiming to empower older adults to recognize and combat scams. [![](https://lh7-us.googleusercontent.com/nMFglf5DFCvqI0_DdI1bxCUPeI_7BqEsTmCkTtJusCEZPyHeOY6lhwMBEnkPt4DO8uzZCWRq7DErP4VCD3P5X_m5frPuwXhTDm2F2MHE19q45O-czc9AkLChP3kX6kB_oc_nESqN1ZBmaCqXbQxy-Q)](https://www.linkedin.com/posts/dwaynemcdaniel_bsides312-activity-7195145831358898176-LLm_?ref=blog.gitguardian.com) Senior Citizens Fighting Scammers by Anita Nikolich A Chicago community working to keep us all a little safer --------------------------------------------------------- Each session brought a new perspective on how we can empower the people in our organizations to embrace better and safer practices. Your author gave a talk about Security Champions programs, such as those from the [WeHackPurple community](https://wehackpurple.com/building-security-champions/?ref=blog.gitguardian.com) and [OWASP](https://owasp.org/www-project-security-champions-guidebook/?ref=blog.gitguardian.com). One asset that many security teams overlook is the ability to tap into their vendors and partners for educational content. At GitGuardian, we are always happy to help spread awareness of the problem of secrets sprawl and would be glad to help your team via online materials and training as well. Feel free to [reach out to learn more about this](https://www.gitguardian.com/contact-us?ref=blog.gitguardian.com).   [![](https://lh7-us.googleusercontent.com/i8eqiQCJshJO06u60g17M5pzq8hf_5igKDSupztNS-MEk4YCuYfmDh8dX4TFY4ISZrqxbIjpS2eSDmr4W_6Q2tEZIjyhmgpOZmlpNgmYbC_zsbOPmQuSZjQgymoBkqoCEjV67WtWxuHdoET9XaS6Uw)](https://twitter.com/popcornhax/status/1789351573681787272?ref=blog.gitguardian.com) Dwayne McDaniel presenting Championing Security at BSides312 As a community-organized event, it would not have been possible without all the amazing volunteers, including the organizers, who devoted a lot of time to making it happen. I want to say a huge thank you to them for helping make Chicago a little more secure. The inaugural BSides312 conference was a resounding success, and I can't wait until we report about It in 2025!
dwayne_mcdaniel
1,874,839
ViBox Can Transform Your Desktop Experience
ViBox enables users to personalize their desktops by adding a diverse range of content, including...
0
2024-06-03T13:37:12
https://dev.to/tufik2/vibox-can-transform-your-desktop-experience-big
productivity, development, electron, news
ViBox enables users to personalize their desktops by adding a diverse range of content, including widgets, bubbles, or icons directly to their status bar. Users have the flexibility to select from a collection of pre-defined widgets or utilize the browser widget to incorporate custom web content. Whether it’s news feeds, social media, notes, or personalized web snippets, ViBox makes it easy for users to access their favorite online information with just a single click. Our commitment extends to continually expanding the widget library, providing even more options and enhancing the existing ones. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8uojqt2b6bexoi5orhws.png) ## The Start This project began as a small personal need. Frustrated by the lack of suitable existing options, I decided to create a small app to address my specific problem. Over time, I expanded its functionality by adding new features, resulting in what we now know as ViBox. This productivity tool has significantly enhanced my workflow as a developer. It allows me to keep my desktop organized, ensuring that my essential content and frequently used tools are readily accessible, instead of wasting time searching in hundreds of tabs, and bookmarks, or having my dock with hundreds of icons. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bej80e5ynxcombcsynav.png) Now, let me demonstrate how ViBox works by adding three different types of content to the available locations: desktop widget, bubble, or status bar icon. ## Desktop Widget In this scenario, we’ll add a straightforward weather widget to our desktop. I want to acknowledge weatherwidget.org for their fantastic widget generator, which we’re utilizing here. You can also visit their website, generate one of their available layouts, copy the generated code, and add it using our browser’s content option. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ewa33dx5tdlurkv0sanz.png) 1. Go to the main panel and choose the weather content. 2. In the configuration panel, provide a widget name, search for the desired location, and click **Add** button. 3. You can resize and drag our widget to any position on the screen. After resizing or dragging, the widget will automatically reposition itself. This behavior is due to the default grid settings, which help create visually appealing layouts. If necessary, you can configure the grid settings in the configuration/general section. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jbu3k4n4ixq3bkzp9lfa.png) ## Bubbles ViBox also allows us to add all content as floating bubbles. This type of content is perfect for adding resources that we use frequently in our daily routines. Like floating bubbles for emails, note widgets, video/audio sites, work chats, or any other web content that you visit frequently throughout the day. Let’s add Gmail as a bubble, using the quick shortcut already available in the main panel. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u8dbj2sehdmrrzmbblx9.png) 1. Search and click in the Gmail shortcut. 2. Click the gear icon located at the top-right corner of the widget. 3. Navigate to the Bubble Tab and enable it. 4. By default, corner attraction is enabled. After you drag and drop the widget for the first time, it will automatically reposition itself in a corner. 5. When you click the bubble, it will open the content. 
You can then resize and drag it to set the desired position or dimensions of the window. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oymhoszp0vo5pr6rv8vp.png) ## Status Bar Icons Bubbles are useful in various scenarios, but sometimes space constraints limit their effectiveness. An alternative approach is to incorporate content as icons in the status bar. For instance, consider adding widgets like the exchange currency widget, ChatGPT, translators, and notes. Let’s add ChatGPT, a widely popular tool these days; having it just one click away would be incredibly convenient. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/huso0tbojbj4xikxprfy.png) 1. Select the Browser Content in the main panel. 2. Configure the widget by setting the widget name as ChatGPT and the URL as [https://chatgpt.com](https://chatgpt.com). 3. In the Status Bar tab, enable it and set the icon type to URL. 4. Quickly search for a good image that works well as an icon (preferably a PNG with transparency), copy the URL, and paste it into the content field. Example: [//cdn-icons-png.flaticon.com/512/12222/12222589.png](https://cdn-icons-png.flaticon.com/512/12222/12222589.png) 5. To finish, click on the ‘Generate’ button. The app will automatically create an icon and display a preview of it. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4epwhchtrq7be3bbmt78.png) ## Open App For All ViBox is completely free. I invite you to join us on this exciting journey to shape the future of the app. Your contribution and feedback will help enhance and deliver a better experience. Enjoy it! I hope this app becomes a valuable productivity tool, just as it has been for me. To conclude, you can download our app from the [**App Store**](https://apps.apple.com/us/app/vibox/id6483685638) or the [**Microsoft Store**](https://apps.microsoft.com/detail/9n052x4kr94s?hl=en-us&gl=US). Visit us at [**vibox.cloudbit.co**](https://vibox.cloudbit.co/), and feel free to comment, request new features, or just say hi on [**Discord**](https://discord.com/channels/1220065588513210448/1220065588513210451).
tufik2
1,875,435
I'm frustrated
I'm a newbie. I want to learn programming, but I don't have any idea where to start. Can anyone tell me...
0
2024-06-03T13:36:07
https://dev.to/theeng11/im-frustrated-n7a
I'm a newbie. I want to learn programming, but I don't have any idea where to start. Can anyone share a roadmap for becoming a front-end developer?
theeng11
1,875,433
How to Upload Images to Google Gemini for Next.js
Introduction Google Gemini exhibits strong performance in multimodal tasks, particularly...
27,713
2024-06-03T13:33:09
https://medium.com/@hantian.pang/how-to-upload-images-to-google-gemini-for-next-js-9555d6d70ce3
nextjs, llm, gemini, webdev
## Introduction Google Gemini exhibits strong performance in multimodal tasks, particularly the latest Gemini 1.5 Flash and Gemini 1.5 Pro. There are two benchmarks for multimodal tasks: reasoning and math. As demonstrated, the Gemini 1.5 Pro performs on par with the latest GPT-4o in visual math tasks 🎉. | Benchmark | Description | Gemini 1.5 Flash | Gemini 1.5 Pro | GPT-4o | | --- | --- | --- | --- | --- | | MMMU | Multi-discipline college-level reasoning problems | 56.1% | 62.2% | 69.1% | | MathVista | Mathematical reasoning in visual contexts | 58.4% | 63.9% | 63.8% | In this blog, I will guide you on how to unlock the vision capabilities of Google Gemini. Let's get started 🚀. ## Prerequisite In my latest [blog](https://dev.to/ppaanngggg/how-to-use-google-gemini-for-nextjs-with-streaming-output-352g), I demonstrated how to use Google Gemini with Next.js for streaming output. While the previous guide focused on text input, this article will show you how to upload images to Google Gemini, using a simple demo. If you're unfamiliar with registering a Google AI API Key or using the Vercel AI SDK, I recommend reading the previous [blog](https://dev.to/ppaanngggg/how-to-use-google-gemini-for-nextjs-with-streaming-output-352g) first. ## Server-Side Here is the complete server-side function. I made a few modifications, namely removing the custom `Message` and importing `CoreMessage` instead. ```tsx "use server"; import { google } from "@ai-sdk/google"; import { CoreMessage, LanguageModel, streamText } from "ai"; import { createStreamableValue } from "ai/rsc"; export async function continueConversation(history: CoreMessage[]) { "use server"; const stream = createStreamableValue(); const model = google.chat("models/gemini-1.5-pro-latest"); (async () => { const { textStream } = await streamText({ model: model, messages: history, }); for await (const text of textStream) { stream.update(text); } stream.done(); })().then(() => {}); return { messages: history, newMessage: stream.value, }; } ``` The `CoreMessage` is a complex structure that can accept various types of data. `CoreUserMessage` is a message sent by a user; it has a fixed role `user` and flexible `content`. The `UserContent` can either be a plain string, a `TextPart` object, or an `ImagePart` object. ```tsx type CoreUserMessage = { role: 'user'; content: UserContent; }; type UserContent = string | Array<TextPart$1 | ImagePart>; interface TextPart$1 { type: 'text'; text: string; } interface ImagePart { type: 'image'; /** Image data. Can either be: - data: a base64-encoded string, a Uint8Array, an ArrayBuffer, or a Buffer - URL: a URL that points to the image */ image: DataContent | URL; /** Optional mime type of the image. */ mimeType?: string; } ``` Let's delve deeper into the `ImagePart`. You can pass either base64-encoded image data or an image URL into the image field. In this instance, to simplify the system, we will pass base64-encoded image data into the message. ## Client-Side This page requires key modifications. We need to upload an image, encode it into a base64 message, and preview the image within the message. The following is the complete code for the page after the update. You can copy and paste this code, and I'll explain the key points afterward.
```tsx "use client"; import { useState } from "react"; import { continueConversation } from "./actions"; import { readStreamableValue } from "ai/rsc"; import { CoreMessage } from "ai"; export default function Home() { const [conversation, setConversation] = useState<CoreMessage[]>([]); const [imageInput, setImageInput] = useState<string>(""); const [textInput, setTextInput] = useState<string>(""); async function getBase64(file: File): Promise<string> { return new Promise((resolve) => { const reader = new FileReader(); reader.readAsDataURL(file); reader.onload = () => { resolve(reader.result as string); }; }); } return ( <div> <div> {conversation.map((message, index) => ( <div key={index}> {message.role}:{" "} { // if it's string, just show it, else if it is image, preview image, if it is text, show the text typeof message.content === "string" ? ( message.content ) : message.content[0].type === "image" ? ( <img alt="" src={ ("data:image;base64," + message.content[0].image) as string } width={640} /> ) : message.content[0].type === "text" ? ( message.content[0].text ) : ( "" ) } </div> ))} </div> <div> <input type="file" onChange={(event) => { if (event.target.files) { const file = event.target.files[0]; getBase64(file).then((result) => { setImageInput(result); }); } else { setImageInput(""); } }} /> <input type="text" value={textInput} onChange={(event) => { setTextInput(event.target.value); }} /> <button onClick={async () => { // append user messages const userMessages: CoreMessage[] = []; if (imageInput.length) { // remove data:*/*;base64 from result const pureBase64 = imageInput .toString() .replace(/^data:image\/\w+;base64,/, ""); userMessages.push({ role: "user", content: [{ type: "image", image: pureBase64 }], }); } if (textInput.length) { userMessages.push({ role: "user", content: [{ type: "text", text: textInput }], }); } const { messages, newMessage } = await continueConversation([ ...conversation, ...userMessages, ]); // collect assistant message let textContent = ""; for await (const delta of readStreamableValue(newMessage)) { textContent = `${textContent}${delta}`; setConversation([ ...messages, { role: "assistant", content: [{ type: "text", text: textContent }], }, ]); } }} > Send Message </button> </div> </div> ); } ``` 1. Due to the complexity of `CoreMessage`, I have added some conditional branches to handle message previews. This is particularly the case when using the `<img />` tag to display base64-encoded images. 2. Add another `<input>` with `type="file"` to upload an image. When a change occurs, read the image file and convert it into a base64 string. 3. Finally, when the send button is clicked, we need to convert the image and text inputs into an array of `CoreMessage`. Please note that the base64 header should be discarded from the image input. ## Body Size Config The default `bodySizeLimit` for `Next.js` is set to 1MB. If you wish to upload files larger than 1MB, you need to adjust the configuration as follows. ```tsx const nextConfig = { experimental: { serverActions: { bodySizeLimit: '10mb' } } }; ``` ## Let’s Test Now I upload the cover image from the previous blog and ask, "What is this picture about?" Then, I click the send button. ![input of image and text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s17lcnqgjic66tzh1v8v.png) Examine the assistant's output; it's quite impressive 👏👏👏. ![output of google gemini](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hq5u5l3o730mbz6bz8xv.png) ## References 1. 
Documentation for the AI SDK: https://sdk.vercel.ai/docs/introduction 2. Google AI Studio: https://ai.google.dev/aistudio
ppaanngggg
1,875,431
Episode 24/22: RxAngular, Analog
Jason Lengstorf and Brandon Roberts discussed and tried out Analog. Julian Jandl and Enea Jahollari...
0
2024-06-03T13:28:29
https://dev.to/this-is-angular/episode-2422-rxangular-analog-4gn
webdev, angular, javascript, programming
Jason Lengstorf and Brandon Roberts discussed and tried out Analog. Julian Jandl and Enea Jahollari wrote about the latest developments in RxAngular. {% embed https://youtu.be/DQ15st0j0bY %} ## RxAngular Zoneless is a ubiquitous topic, but it is not completely new. For many years, RxAngular, a community project, has allowed us to write zoneless applications with fine-grained change detection. Next to a state management library, RxAngular offers a template library, which provides directives designed for high performance and zonelessness. The third library is a CDK, where you can build your own directives using RxAngular-specific options. As the name says, RxAngular was built around Observables but has recently received support for Signals so that it doesn't diverge from Angular's current path. Julian Jandl and Enea Jahollari wrote an article explaining the changes and providing some contextual information. {% embed https://push-based.io/article/new-features-for-rxangular-native-signal-support-and-improved-state %} ## Analog Jason Lengstorf is a popular streamer. In his latest episode, he had Brandon Roberts as a guest, and it was all about Analog, a meta-framework for Angular. The stream started with an essential topic: why? Angular was not a perfect fit for static websites where SEO plays a crucial part. Now, with the latest improvements in SSR, Angular is catching up, but SSR only takes you so far, and this is where Analog steps in. In addition to SSR, it has file-based routing, Markdown Templates, Server-Side data fetching, and the Analog format. One of the main ingredients is Vite.
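As a rough illustration of what the template library looks like in practice, here is a minimal sketch of a component using RxAngular's `*rxLet` directive to bind an Observable without relying on zone.js. Treat it as an assumption-laden example rather than code from the article: the exact import path and directive name can vary between RxAngular versions.

```typescript
import { Component } from '@angular/core';
import { RxLet } from '@rx-angular/template/let'; // standalone directive in recent RxAngular versions
import { interval, map } from 'rxjs';

@Component({
  selector: 'app-ticker',
  standalone: true,
  imports: [RxLet],
  template: `
    <!-- *rxLet subscribes for us and schedules fine-grained updates for this view only -->
    <p *rxLet="seconds$; let seconds">Elapsed: {{ seconds }}s</p>
  `,
})
export class TickerComponent {
  // Emits 1, 2, 3, ... once per second
  seconds$ = interval(1000).pipe(map((i) => i + 1));
}
```

The same idea extends to `*rxFor` for lists, which is where the fine-grained, zoneless rendering tends to pay off most.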
ng_news
1,875,429
How to change the default terminal in VSCode from PowerShell to Git Bash
To change the default terminal in VSCode from PowerShell to Git Bash, follow these steps. Open VSCode Launch VSCode. Open the settings From the menu bar...
0
2024-06-03T13:26:48
https://dev.to/aaattt/vscodededehuorutonotaminaruwopowershellkaragit-bashnibian-geng-suruniha-5hgg
To change the default terminal in VSCode from PowerShell to Git Bash, follow these steps. 1. **Open VSCode** Launch VSCode. 2. **Open the settings** From the menu bar, select `File` -> `Preferences` -> `Settings`, or use the shortcut `Ctrl + ,` to open the settings. 3. **Edit the settings** Click the icon at the top right of the settings screen, select `Open Settings (JSON)`, and open the settings file (settings.json). 4. **Add the following code to settings.json** ```json "terminal.integrated.shell.windows": "C:\\Program Files\\Git\\bin\\bash.exe" ``` This is the path to the Git Bash executable. If it is installed in a different location, specify that path instead. 5. **Save the changes** Save the file. The VSCode terminal will now use Git Bash. You can also make the change from the GUI without editing the settings file: 1. **Open the settings** Select `File` -> `Preferences` -> `Settings` as before. 2. **Type "terminal.integrated.shell.windows" into the search bar** Enter the Git Bash path `C:\\Program Files\\Git\\bin\\bash.exe` in the setting that appears. Now Git Bash will start the next time you open a terminal. ## perplexity The steps to change the default terminal in VSCode from PowerShell to Git Bash are as follows. 1. Open VSCode and open the command palette with `Ctrl+Shift+P`. 2. Type "Terminal: Select Default Profile" and select that command. 3. Select "Git Bash" from the list that appears.[1][4] Alternatively, you can edit the settings file (`settings.json`) directly. 1. Open the Settings screen with `Ctrl+,`, or open the command palette with `Ctrl+Shift+P`, type "Preferences: Open Settings (JSON)", and select it.[3] 2. Add `"terminal.integrated.defaultProfile.windows": "Git Bash"`.[1] Now, when you restart VSCode or open a new terminal, Git Bash will start by default. As a side note, the reason for using Git Bash is that it is convenient when working with Git on Windows: in PowerShell, Git commands may not be available or may behave oddly.[2] Citations: [1] https://zaki-hmkc.hatenablog.com/entry/2023/03/23/002945 [2] https://ralacode.com/blog/post/use-git-bash-on-vscode/ [3] https://qiita.com/daikiozawa/items/48a9fe0e2898c7dd78ae [4] https://zenn.dev/unsoluble_sugar/articles/362a17a7f57020 [5] https://zenn.dev/valmet083/articles/389fc86c072ed8
aaattt
1,875,428
What are the PSD2 & SCA Requirements?
As digital transactions continue to rise, securing these transactions is very important. The European...
0
2024-06-03T13:25:48
https://www.corbado.com/blog/psd2-sca-requirements
cybersecurity, webdev, regulations, psd2
As digital transactions continue to rise, securing these transactions is very important. The European Union introduced the **Payment Services Directive 2 (PSD2)** to enhance payment security and establish a unified framework for electronic payments across Europe. A crucial component of PSD2 is **Strong Customer Authentication (SCA)**, which mandates multi-factor authentication to reduce fraud and protect consumers. This blog analyzes the legal and technical implications of PSD2 and SCA, highlighting their impact on payment security and the role of passkeys. **_[Read full blog post here](https://www.corbado.com/blog/psd2-sca-requirements)_** ## The Legal Foundation of PSD2 and SCA Let’s start by having a look at the legal foundation. ### Directive (EU) 2015/2366 (PSD2) PSD2, adopted in 2015, set the stage for stronger payment security in the EU. It introduced the concept of SCA, requiring payment service providers to implement multi-factor authentication. The directive ensures that whenever users access their payment accounts online, initiate electronic payments, or perform any action that might involve a risk of fraud, they must undergo SCA. ### Regulatory Technical Standards (RTS) In 2018, the European Commission further detailed these requirements through Delegated Regulation (EU) 2018/389, known as the Regulatory Technical Standards (RTS) on SCA. These standards specify that SCA must involve two or more elements from distinct categories: knowledge (something only the user knows), possession (something only the user possesses), and inherence (something the user is). ### Role of the European Banking Authority (EBA) The EBA plays a crucial role in ensuring consistent application of these standards across EU member states. Through opinions, guidelines, and the Single Rulebook Q&A, the EBA provides clarity and guidance on implementing SCA, helping national regulators and market participants adhere to these stringent requirements. ### Strong Customer Authentication (SCA) Requirements SCA mandates that financial institutions implement at least two of the three authentication elements: - **Knowledge:** Examples include passwords or PINs. - **Possession:** This could be a smartphone or a hardware token. - **Inherence:** Biometric factors such as fingerprints or facial recognition. The goal is to ensure that even if one element is compromised, the others remain secure, thereby protecting the user’s data and transactions. ## Dynamic Linking An essential aspect of SCA is dynamic linking, which ensures that transaction details are bound to the authentication process. This means that any change in the transaction amount or recipient will invalidate the authentication, thereby mitigating fraud risks. ## European Banking Authority (EBA) Opinions The EBA has issued several opinions to address ambiguities and provide further guidance: - **EBA Opinion 2018:** Clarified that SCA must involve two elements from different categories, effectively making it a true two-factor authentication (2FA). - **EBA Opinion 2019:** Emphasized the requirement for distinct authentication factors and provided examples of compliant and non-compliant implementations. These opinions, while not legally binding, are highly authoritative and significantly influence how national regulators and financial institutions implement SCA. ## Key Takeaways for Passkey Implementation Passkeys offer a robust solution for meeting SCA requirements. 
By leveraging biometric authentication (inherence) and secure device-bound elements (possession), passkeys can provide a seamless yet secure user experience. They eliminate the need for traditional passwords, which are often a weak link in the authentication process. ### For Developers: Implementing passkeys can streamline the authentication process, enhance security, and improve user experience. Detailed guides and SDKs are available to help developers integrate passkeys into their applications swiftly. ### For Product Managers: Understanding the benefits of passkeys can aid in decision-making regarding user authentication strategies. Passkeys not only comply with SCA requirements but also reduce friction during user onboarding and transactions, potentially increasing user retention and satisfaction. ## Conclusion PSD2 and SCA represent significant strides towards securing electronic payments in the EU. By mandating multi-factor authentication and incorporating dynamic linking, these regulations aim to reduce fraud and protect consumers. Passkeys, with their robust security features, align well with these requirements, offering a future-proof solution for authentication. Find out more on [Corbado’s Blog](https://www.corbado.com/blog/psd2-sca-requirements).
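As a closing, purely illustrative sketch (not Corbado's SDK or any particular provider's implementation), here is roughly what the browser half of a passkey-based SCA step could look like using the standard WebAuthn API. The endpoint names, payload shapes, and the idea of deriving the challenge from the transaction details (to mirror dynamic linking) are assumptions for illustration only.

```typescript
// Hypothetical client-side flow for confirming a payment with a passkey.
// A real PSD2/SCA integration must follow the PSP's documented API and the RTS requirements.
async function confirmPaymentWithPasskey(amount: string, payee: string): Promise<void> {
  // 1. Ask the server for a challenge bound to this exact transaction (dynamic linking).
  const res = await fetch('/api/sca/challenge', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ amount, payee }),
  });
  const { challenge } = await res.json(); // assumed to be base64-encoded

  // 2. The authenticator verifies the user (inherence or knowledge) on a device they possess.
  const credential = (await navigator.credentials.get({
    publicKey: {
      challenge: Uint8Array.from(atob(challenge), (c) => c.charCodeAt(0)),
      userVerification: 'required',
    },
  })) as PublicKeyCredential | null;
  if (!credential) throw new Error('Passkey ceremony was cancelled');

  // 3. Return the credential for server-side verification against the same transaction data.
  //    (A real integration serializes credential.response explicitly.)
  await fetch('/api/sca/verify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ amount, payee, credentialId: credential.id }),
  });
}
```

If any detail of the transaction changes after the challenge is issued, the server-side check fails, which mirrors what dynamic linking is meant to guarantee.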
vdelitz
1,875,425
Efficiency Improvements in Oil and Gas Extraction
Efficiency improvements in oil and gas extraction: game-changing innovations. Oil and gas extraction is the process by which fossil fuels are...
0
2024-06-03T13:24:03
https://dev.to/jonathan_hulseh_7a93b8f43/efficiency-improvements-in-oil-and-gas-extraction-59ke
Efficiency improvements in oil and gas extraction: game-changing innovations. Oil and gas extraction is the process by which fossil fuels are recovered from the ground. This process can be difficult and dangerous, which is why new technologies are constantly being developed to make it more efficient, safer, and more affordable. Benefits of efficiency improvements in oil and gas extraction: efficiency improvements offer a wide range of benefits. First, they increase efficiency, meaning more oil and gas can be recovered in the same amount of time. Second, they lower costs, so oil and gas can be extracted at a reduced price. Third, they improve safety, meaning fewer accidents occur on oil and gas rigs. Safety: safety is crucial in oil and gas extraction, which is why efficiency improvements are designed with safety in mind. One example is the use of automation technology, which reduces the need for human intervention. Another example is the use of remote sensors, which can detect when something goes wrong and alert operators before damage is done. Innovation in efficiency improvements: efficiency improvements in oil and gas extraction are constantly evolving. New innovations are being developed all the time, making the whole extraction process safer and more efficient. For instance, one recent innovation is the use of drones for monitoring and inspecting rigs, which allows operators to identify problems before they become serious. Uses of efficiency improvements: efficiency improvements are used in a wide range of ways; a few examples include Drilling & Workover Rigs, treatment, and transportation. One important use of these improvements is the processing of oil and gas, which involves separating the various components so that they can be used correctly for different purposes. Using efficiency improvements: using efficiency improvements can be easy or difficult depending on the technology involved. Some of the technology is automated and requires minimal human intervention, while some requires significant training before it can be used correctly and effectively. It is important to follow all safety instructions and guidelines provided by the manufacturer to prevent accidents and ensure success. Service and quality of efficiency improvements: efficiency improvements in oil and gas extraction, such as the Blowout Preventer (BOP), are manufactured to deliver top-notch service. This means they are built to last and can operate under harsh conditions. In addition, efficiency improvements are designed to improve the quality of the oil and gas extracted, which means the end product is cleaner, safer, and more valuable. Applications of efficiency improvements: efficiency improvements in oil and gas extraction are used in a range of different applications. Some of the most typical applications are drilling, production, transportation, and processing.
Every application requires different types of efficiency improvements, so service providers need to keep using up-to-date technologies in order to remain competitive. In short, efficiency improvements in oil and gas extraction equipment are game-changing innovations with a wide range of benefits, including increased efficiency, reduced costs, and improved safety. With the constant development of new technologies, they are certain to keep transforming the oil and gas sectors for a long time to come. Source: https://www.cngongboshi.com/drilling--workover-rigs
jonathan_hulseh_7a93b8f43
1,866,271
Cloud Solutions for Insurance Organizations
Improving Cloud Infrastructure Outcomes for Carriers and All-Type Organizations Insurance...
0
2024-06-03T13:24:00
https://dev.to/brainboard/cloud-solutions-for-insurance-organizations-10eh
cloud, infrastructureascode, insurance, iot
## Improving Cloud Infrastructure Outcomes for Carriers and All-Type Organizations Insurance organizations, including carriers and various service providers, need robust cloud solutions to scale effectively, reduce costs, and maintain a competitive edge. Brainboard offers innovative cloud solutions tailored to meet the unique demands of the insurance industry. ## Scale Effectively Your Cloud Infrastructures ### **Have the Competitive Advantage** Carriers that can scale effectively, reduce costs, and offer competitive prices gain a significant competitive advantage. Brainboard facilitates rapid configuration and integration, enabling carriers to innovate and adapt quickly in the cloud environment. ### **Support All Along** ![configure aws resource](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0r57y084i0k9vilojopu.png) From pre-migration planning to execution and post-migration optimizations, Brainboard has managed numerous cloud transformations for clients on Azure, OCI, AWS, and GCP. We provide comprehensive support throughout the entire migration process. ### **Think Visual** ![visualize terraform](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jif0vkkiwbik7b0kb2vc.png) Only 23% of insurance respondents report benefiting from tools and technology that integrate industry-specific common data models, connectors, and APIs. Brainboard offers a new visual approach to designing, deploying, and managing cloud infrastructure, enhancing understanding and efficiency. ### **All-in-One-Place Best Practices** ![CICD automation designer](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ps3k10sqbmfx1peiwp3.png) Insurers often have extensive legacy technology stacks. Establishing realistic short- and long-term strategies is crucial for success from day one. Brainboard helps you assess your current Terraform maturity level, understand the scale of changes ahead, and create a logical action plan. ### **Data Governance for Sensitive Policyholder Information** ![cloud collaboration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/87mxg2vca2pkcsd40atf.png) Insurers manage vast amounts of sensitive policyholder information, making strong cybersecurity essential. Brainboard specializes in helping insurers assess and improve their cloud security architecture, covering application and integration, data and privacy, identity and access management, architecture engineering and hardening, and staff preparedness. Our comprehensive approach ensures technical and operational resilience, building trust with customers and regulators and enabling business growth. ## Ensure Your Infrastructure is Always Up-to-Date with Brainboard Stay ahead of the curve with Brainboard's cutting-edge cloud solutions, designed to keep your infrastructure up-to-date and secure. [Talk to our experts](https://www.brainboard.co/contact-us) to learn how Brainboard can transform your insurance organization's cloud infrastructure.
miketysonofthecloud
1,875,385
Creating Android and iOS applications with Flutter
Flutter is an open-source framework created by Google. It makes it possible to develop applications for...
0
2024-06-03T13:21:32
https://dev.to/media-web-services/creation-dapplication-android-et-ios-avec-flutter-5568
flutter, android, ios
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qazt35ujhie3788j1kvc.jpg) [Flutter](https://flutter.dev/) is an open-source framework created by Google. It makes it possible to develop applications for Android and iOS from a single codebase. This tool simplifies mobile application development and offers a native user experience while using a single source code. This article presents Flutter's features and advantages, under the guidance of [Média Web Services](https://mws-services.com/). ## What is Flutter? Flutter is a mobile application development framework that uses the Dart programming language. Launched by Google in 2017, Flutter makes it possible to create high-performing, attractive mobile applications for Android and iOS with a single source code. It stands out for its high-performance graphics rendering and its reactive architecture. ## How does Flutter make application development easier? ### Native application development versus cross-platform application development Mobile application development can be done in two ways: through native application development or through cross-platform application development. Flutter belongs to the latter category, offering significant advantages over traditional methods. **Native application development** requires creating two separate applications: one for Android using Java or Kotlin, and another for iOS using Swift. This approach ensures optimal performance and integration with each platform, but it takes more time and resources. **Cross-platform application development** Flutter makes it possible to create a single application that runs on several operating systems, such as Android and iOS. This method greatly reduces the time and money needed for development while offering a high-quality user experience. ## Advantages of Flutter - **Single Source Code:** One of Flutter's main advantages is that it allows a single source code to be used to create Android and iOS applications. This reduces development time and simplifies maintenance. - **High Performance:** Flutter offers a smooth, responsive user experience, comparable to that of native applications. - **Community Support and Documentation:** Flutter benefits from **an active community** and comprehensive documentation, making learning and development easier for new users. - **Hot Reload:** Flutter's **Hot Reload** feature lets developers immediately see changes made to the code without having to restart the application. Creating an Android and iOS application with Flutter offers significant advantages in terms of performance and cost. With the expertise of [Media Web Services](https://mws-services.com/), this approach becomes even more effective. Their mastery of Flutter guarantees smooth development and an optimal user experience. By working with [Media Web Services](https://mws-services.com/), companies can quickly bring high-quality applications to market, adapted to all platforms, and thus stand out in a competitive mobile environment.
media-web-services
1,875,390
Optimism Retro Funding Round 4: All you need to know
Optimism is a Layer 2 scaling solution that transcends being just a simple blockchain protocol. As a...
0
2024-06-03T13:20:15
https://dev.to/modenetwork/optimism-retro-funding-round-4-all-you-need-to-know-27bi
Optimism is a Layer 2 scaling solution that transcends being just a simple blockchain protocol. As a Collective, It fosters a collaborative environment, which tackles various technical aspects of a decentralized ecosystem. This Collective is building the foundation for next-generation applications on a network of blockchains called a "[Superchain](https://app.optimism.io/superchain/)." The [OP mainnet](https://optimistic.etherscan.io/), a high-throughput scaling solution for Ethereum, serves as the core of this Superchain. Developers can leverage the [OP Stack](https://docs.optimism.io/stack/getting-started), a collaborative knowledge base, to streamline development on the Superchain. A strong community backs this technical framework and [OP governance](https://community.optimism.io/docs/governance/#) empowers the community to participate in shaping the project's future through a sophisticated voting system. Optimism is a powerful combination of technical infrastructure and a community working together to push the boundaries of the blockchain ecosystem. ## Optimism Retro funding ![retro funding general pic](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yz697fsvi6i3xj3dfy4a.png) Optimism Retro funding, initially called Retroactive Public Goods Funding (RPGF), aims to financially reward those who contribute to the growth of the Superchain ecosystem. Round 4 is currently open for application and it focuses on onchain builders, rewarding those building products, tools, or infrastructure that place a demand on OP blockspace and provide value to the ecosystem ![virtuous cycle for rpgf](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/okww2mpx0fy3pf86fkwx.png) ## Key Dates 1. Sign up: May 23rd - June 6th 2. Application Review Process: June 6th - June 18th 3. Voting: June 23rd - July 8th 4. Results & Grant delivery: July 15th ## What are the Eligibility Criteria? The Eligibility criteria are a set of standards designed to restrict the influx of low-impact projects while maintaining a balance between broad inclusivity and the conditions required to accept projects with significant impact for the funding round. Eligible projects should have to; - Deploy their onchain contracts on one or multiple of the following OP chains: OP Mainnet, Base, Zora, Mode, Frax, and Metal, and meet the following requirements: Onchain contracts have interactions from 420 unique addresses from Jan 1st - May 1st, 2024 Onchain contracts had their first transaction before April 1st, 2024 Onchain contract had more than 10 days of activity from Jan 1st - May 1st, 2024 Verify their onchain contracts in the Retro Funding sign-up process. - Make their contract code available in a public GitHub repo, for which ownership has been verified in the Retro Funding sign-up process. - Confirm that they will comply with Optimism Foundation KYC requirements and do not reside in a sanctioned country. - Submit a Retro Funding application before June 6th, 2024, and comply with application rules. ## What would you be rewarded for? The Retro Funding round 4 is focusing on onchain builders who have made a significant impact from October 2023 - June 2024. In no particular order, below are some of the metrics that would be considered. 1. Demand generated for Optimism blockspace. 2. Interactions from new Optimism users. 3. Open source license of contract code. 4. Repeated Interactions from Optimism users. 5. Interactions from Optimism users with high trust scores. ## How do you apply? 
Applications can be edited after submission, so it is best to submit as early as possible and make any changes or corrections later on. 1. Go to [Optimism Retrofunding](https://retrofunding.optimism.io/) and Sign in with [Farcaster](https://www.farcaster.xyz/). ![Optimism Retro funding](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tusu3d18g77u8bv8qvzb.png) 2. Add a project to your profile. ![Profile retro funding](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkloxoo5g3sxwy7548ij.png) Note the following while filling in the details for your project: - Your project’s contract must be deployed on Mode or any of the other chains in the Superchain. - To verify ownership of the contract you have to sign a message from the deployer wallet of the contract. - To prove ownership of your project's Github Repository you’ll have to push a generated JSON to your default branch. - You can add your teammates to the project with their Farcaster profiles. - Add an email address. 3. After filling in all your details, double-check everything, then apply! You should get an email confirming your application. ## Resources Below are some extensive resources to check out. [Retrofunding 4 Round details](https://gov.optimism.io/t/retro-funding-4-onchain-builders-round-details/7988) by Optimism [Retrofunding 4 Application process](https://gov.optimism.io/t/retro-funding-4-application-process/8013) by Optimism [A quick guide to Optimism Retro Funding 4](https://www.youtube.com/watch?si=kyNYVSwPav6tSRrW&v=_JAWk2IHNn0&feature=youtu.be) by [Koha](http://@kohawithstuff) This post was written by Revealer, tech cooperator @ Mode.
modenetwork
1,875,383
Stop Using Docker in Docker for Testing
Embrace Kubernetes to Improve Performance, Scalability, and Ease of Configuration for your...
0
2024-06-03T13:17:53
https://testkube.io/learn/stop-using-docker-in-docker-for-testing
kubernetes, devops, docker, testkube
### Embrace Kubernetes to Improve Performance, Scalability, and Ease of Configuration for your Tests Testing applications is becoming increasingly difficult as they grow in size and complexity. For someone responsible for testing an application of that scale, simply running tests within individual containers doesn't help much. Hence, there is a need for a scalable and flexible approach that allows for more efficient and effective testing to catch those pesky bugs. One technique that gained popularity for testing distributed applications was using Docker-in-Docker or DinD. The idea was to run a Docker daemon inside a Docker container itself and then use that inner Docker environment to orchestrate the full application stack on a single machine. Initially, it provided a seemingly straightforward way to create realistic multi-container environments for integration testing. However, as the applications grew, teams realized their substantial drawbacks and limitations, from resource constraints and overheads to inconsistencies. If you're responsible for testing in your organization, you can relate to the above scenario very well. Testing in Docker in Docker comes with its issues, and for large applications, it can become a nightmare. In this blog post, we'll look at the challenges with Docker in Docker approach and why you should embrace Kubernetes for Testing. ## Docker in Docker 101 (DinD) But before we further delve into this blog post, a short primer on Docker in Docker (DinD) is necessary. As the name suggests, it's running a docker container inside a docker container. The approach [first surfaced in late 2013](https://www.docker.com/blog/docker-can-now-run-within-docker/), and it soon gained popularity. One common use case of DinD is continuous integration (CI) pipelines. The CI system could start up a DinD container as part of the build process. This inner Docker is then used to pull application images, start their containers, and run tests within that isolated environment. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oia113q8tedl37lafj5i.png) While DinD can be convenient for certain workflows as mentioned above, it introduces a range of complexities and limitations. Let us discuss them in the following section. ## Problems with Running Docker In Docker Docker-in-Docker seemed like a convenient solution for testing distributed applications initially, but the reality is that this approach has significant drawbacks that become more evident as your applications grow in complexity. From a testing standpoint, DinD introduces several challenges that can undermine the very goals you're trying to achieve. ### Inefficient Performance One of the major drawbacks of the DinD approach is its sub-optimal or inefficient performance, especially at scale. The DinD testing approach hogs your resources if not managed correctly. Because each docker daemon reserves a set of resources, including CPU, RAM, networking, etc, on a system with limited resources, this overhead can quickly lead to performance bottlenecks and instability while testing. ### Inability to Mirror Production Setup Testing an application in a production-like environment is vital. However, the DinD environment fails to accurately mimic your actual production setup. The nested container environment may exhibit different behavior due to different storage, networking, and security configurations that may not be similar to your production environment. 
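One quick way to see this divergence is to start a throwaway DinD daemon the way many CI jobs do and check what it reports. The snippet below is only a rough sketch — the image tag, container name, and inspected fields are illustrative — and the output would be compared against one of your production hosts:

```bash
# Start a disposable Docker-in-Docker daemon (the official docker:dind image needs --privileged)
docker run -d --privileged --name dind docker:dind

# Give the nested daemon a few seconds to start, then inspect its runtime and storage driver
sleep 5
docker exec dind docker info --format '{{.DefaultRuntime}} / {{.Driver}}'
```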
Your production containers may also run on a different container runtime (containerd, CRI-O, etc.) than the one DinD provides by default. ### Docker Security Concerns When your applications deal with sensitive data, you need to be extremely cautious when testing with the DinD approach, as it can open up new vulnerabilities and security risks if not configured properly. By default, the Docker daemon runs with root privileges; misconfiguration can expose your application data or widen security loopholes. Furthermore, in multi-cluster or multi-tenant setups, isolation between the different nested Docker instances is a concern, potentially allowing cross-tenant data leaks and vulnerabilities. ### Docker Scalability Concerns With DinD, you are limited to the resources available on a single host machine. Even if multiple DinD instances are used, utilizing resources efficiently and automating scaling becomes difficult without an orchestrator. The lack of out-of-the-box scaling options severely limits the ability to conduct rigorous integration, load, and performance tests. Teams struggle to accurately model real-world scale, resiliency scenarios, and sophisticated deployment requirements. From performance overheads to security and scaling concerns, the technical debt of using DinD stacks up rapidly, and continued use of this approach leads to growing maintenance overhead. The major concern, however, is that DinD severely restricts your ability to conduct comprehensive integration, load, and other performance tests at scale. Even the creator of DinD asks you to [think twice before using DinD for your CI or testing environment.](https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/) ## Why Embrace Kubernetes For Testing? As testing needs evolve, the Docker-in-Docker approach fails to provide the scalability, flexibility, and resilience that modern distributed systems demand. This realization led us to explore Kubernetes as a more robust solution for modeling and testing our cloud-native applications. Let us look at how embracing Kubernetes for testing overcomes the issues with DinD. ### Built-in Scalability Kubernetes comes with built-in orchestration capabilities that enable on-demand scaling, something the DinD approach cannot offer. It provides mechanisms like horizontal pod autoscaling, cluster autoscaling, and the ability to automatically schedule containers across a pool of nodes. This allows you to model realistic load scenarios by scaling application components with simple commands or auto-scaling rules based on CPU/memory consumption. ### Consistent Production-Mirrored Environments The core goal of any testing process is to ensure testing takes place in an environment that closely resembles production. Today, Kubernetes is dubbed the OS of the cloud and has become the standard for deploying and managing applications on it. This allows your test clusters to accurately mirror your production environment, leading to more effective testing. ### Simplified and Efficient Kubernetes is known to be a complex tool to master and involves a steep learning curve at the start. However, as you progress through the Kubernetes journey, things become simpler. The rise of cloud-native CI/CD solutions built specifically for Kubernetes has further contributed to standardizing testing workflows.
Instead of building custom pipelines, teams can leverage pre-built integrations for spinning up test clusters, running test suites within live environments, collecting telemetry data, and more. These were just a few reasons why we should embrace Kubernetes for testing. It brings in a lot of advantages that make your testing processes more efficient. However, one of the challenges when it comes to testing in Kubernetes is the lack of Kubernetes native testing tools and frameworks, which limits your ability to take full advantage of Kubernetes. This is where Testkube comes in. ## Run Your Tests in Kubernetes With Testkube Being the only Kubernetes native test execution and orchestration framework, Testkube takes your testing efforts to the next level. It helps you to build and optimize your test workflows for Kubernetes. It has a suite of advantages, some of which are listed below: - You can plug in any testing tool to Testkube, make it Kubernetes-native and take full advantage of Kubernetes. Check the complete list of [Testkube integrations](https://testkube.io/integrations). - With Testkube you can easily create and manage complex test scenarios across multiple environments. - Testkube [integrates with a suite of CI/CD tools ](https://testkube.io/learn#ci-cd)to enhance your DevOps pipelines by enabling complete end-to-end testing of your applications. Moving your testing framework from DinD to Kubernetes might seem intimidating, but with Testkube, it's a breeze. As mentioned earlier, Testkube offers a [suite of features](https://testkube.io/pricing#features) along with tools and integrations that streamline test deployment, execution and monitoring within Kubernetes ensuring your tests are as close to the production environment as possible. ### Using Testkube Test Workflows Testkube allows you to create end-to-end test workflows using your favorite testing tool. It also streamlines your testing process by decoupling test execution from your CI pipeline. This democratizes the testing process, allowing teams to execute tests on-demand without disrupting the CI workflow. By separating test execution from the CI pipeline, Testkube offers greater flexibility and control over the testing process. In this section, we'll examine how to create a test workflow using Testkube. We'll create one for Postman, but you can configure a workflow for virtually any testing tool. ### Creating A Test Workflow Using Postman In order to create a test workflow, you need to have a cluster running along with the Testkube agent. When you have this running, you can log in to the Testkube dashboard, click on Test Workflows and "Add A New Test Workflow" and choose "Start From An Example" ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yzqde75d74shwkg2vl6f.png) By default, a k6 test spec is provided. We'll be using Postman instead. To do so, click on the "Discover more examples" link, which will take you to the [Test Workflows document](https://docs.testkube.io/articles/test-workflows/#example---test-workflow-for-postman). Scroll down to the Postman section and copy the code present there. 
You can also use the one shown below: ```yaml apiVersion: testworkflows.testkube.io/v1 kind: TestWorkflow metadata: name: postman-workflow-example # name of the Test Workflow spec: content: git: # checking out from git repository uri: https://github.com/kubeshop/testkube revision: main paths: - test/postman/executor-tests/postman-executor-smoke-without-envs.postman_collection.json container: # container settings resources: # resource settings (optional) requests: # resource requests cpu: 256m memory: 128Mi workingDir: /data/repo/test/postman/executor-tests # default workingDir steps: # steps that will be executed by this Test Workflow - name: Run test run: image: postman/newman:6-alpine # image used while running specific step args: # args passed to the container - run - postman-executor-smoke-without-envs.postman_collection.json ``` Paste this in the spec section and click Create. This will create the test workflow and take you to the workflow page. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ekkc9ntp5ku92axmepx4.png) ### Executing a Test Workflow Click on "Run Now" to start the workflow execution. Clicking on the respective execution will show you the logs, artifacts, and the underlying workflow file. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xuoa5muli5kdhtjo7epb.png) Testkube Workflows preserve the logs and artifacts generated from the tests, thus providing better observability. Creating a test workflow is fairly straightforward, as shown above, and you can bring your testing tool and create a workflow for it with a few clicks. ## Summary The move from Docker in Docker to Kubernetes for testing is the need of the hour, especially considering the scale and complexity of our applications. By leveraging Kubernetes for testing, we take advantage of its inherent capabilities of scaling, security and efficiency to name a few. And with tools like Testkube, organizations can achieve more reliable, efficient, and cost-effective testing processes. If you are ready to elevate your testing strategy with Kubernetes and [Testkube ](http://testkube.io)feel free to visit our [website](http://testkube.io) to learn more about Testkube's capabilities and how it can transform your testing workflow.
michael20003
1,875,525
Next Gen User Experiences – Vercel Ship 2024
Jared Palmer presented on: Next Gen User Experiences The idea is to: Give chatbots rich...
0
2024-06-04T11:56:39
https://blog.jonathanflower.com/artificial-intelligence/next-gen-user-experiences-vercel-ship-2024/
ai, softwaredevelopment, codingtools, genai
--- title: Next Gen User Experiences – Vercel Ship 2024 published: true date: 2024-06-03 13:17:46 UTC tags: ArtificialIntelligence,SoftwareDevelopment,AI,CodingTools,GenAI canonical_url: https://blog.jonathanflower.com/artificial-intelligence/next-gen-user-experiences-vercel-ship-2024/ --- ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tklb3o6uzbnk5hvs8vkf.png) [Jared Palmer](https://x.com/jaredpalmer) presented on: > Next Gen User Experiences The idea is to: > Give chatbots rich component based interfaces with what we’re calling Generative UI This is supported by the power of [Vercel’s AI SDK](https://sdk.vercel.ai/docs/introduction) and [React Server Components](https://react.dev/reference/rsc/server-components) See it in action: [![✂](https://s.w.org/images/core/emoji/15.0.3/72x72/2702.png) Generative UI – interactive schedule component fully wired into the chat experience](https://youtube.com/clip/UgkxqLkH7fNdoVbZPTqV15o2vCeETk8y7ZRh?si=_EPT3KRfPs6Qn9KW) This is the future because it greatly simplifies the interaction. Rather than having a different AI bot integrated into each app, my favorite apps are integrated into a single chat interface. This way the AI has the context it needs to act on requests such as “book a meeting with Lee @2pm today to talk about the movie “Her”.
jfbloom22
1,875,381
Precision Manufacturing: Quanzhou Longxing Quansheng Trading Co., Ltd's Focus
0b6d564137f67dce6ec4a97a9099f1e36cc90bef64a60f69be2e3c68da0b7574.jpg What Precision...
0
2024-06-03T13:15:33
https://dev.to/jonathan_hulseh_7a93b8f43/precision-manufacturing-quanzhou-longxing-quansheng-trading-co-ltds-focus-bje
What Is Precision Manufacturing? Precision manufacturing is the use of machines to make products to exact specifications. Every item produced is made exactly like the one before it. The machines used to create these items are extremely accurate and precise, which ensures the highest quality. Advantages of Precision Manufacturing The advantages of precision manufacturing are numerous. First, it enables companies to make items quickly and efficiently. Second, it ensures the items made are of the highest quality. Third, it keeps the items consistent. Fourth, it decreases waste and costs. And fifth, it makes it easier to produce custom designs. Innovation in Precision Manufacturing Innovation is key to precision manufacturing. New technologies are constantly being developed to make the process faster, more efficient, and more accurate. For instance, equipment such as the PINBUSHING FOR agricultural machine can be configured to make products automatically, without the need for human intervention. This saves time and reduces mistakes. Additionally, new products are being developed that are stronger and more durable than ever before. Safety in Precision Manufacturing Safety is of utmost importance in precision manufacturing. The machines used must be operated by trained experts to ensure accidents don't occur. Additionally, the products must be handled carefully to avoid injury. Companies that use precision manufacturing must have stringent safety measures in place to protect their employees. How to Use Precision Manufacturing Precision manufacturing is used in a wide range of industries, from automotive to aerospace. The process can be used to create a variety of products, including parts, components, and finished goods. To use precision manufacturing, a business must first determine the specifications for the item it wants to make. It must then program the machines to create the item to those specifications. Once the machines, such as the track chain, are programmed, they can be set to work and the item can be produced. Service in Precision Manufacturing Service is an important part of precision manufacturing. Companies that use precision manufacturing must be able to provide support to their customers. This includes help with design, troubleshooting, maintenance, and repair. They must also be able to provide prompt delivery of items to their customers. Quality in Precision Manufacturing Quality is the most important aspect of precision manufacturing. Companies that use precision manufacturing must ensure their items are of the highest quality, meaning they must be free of defects, consistent, and durable. To ensure quality, companies must use the best materials and equipment, and they must follow strict quality procedures. Applications of Precision Manufacturing Precision manufacturing is used in a wide range of industries. It can be used to create parts such as the TCAK TRACK CHAIN FOR agricultural machine, components for electronic devices, and finished products for customers. Some examples of products made using precision manufacturing include planes, cars, mobile phones, and surgical devices. Source: https://www.loonsin.com/application/PINBUSHING-FOR-agricultural-machine
jonathan_hulseh_7a93b8f43
1,875,380
Exploring the Distinctive World of Refurbished Laptops
In an age in which generation advances at breakneck tempo, having a laptop has grow to be an...
0
2024-06-03T13:12:39
https://dev.to/liong/exploring-the-distinctive-world-of-refurbished-laptops-b42
budgetfriendly, kualalumpur, laptop, software
In an age in which technology advances at breakneck pace, owning a laptop has become an indispensable tool for both work and entertainment. However, the hefty price tags attached to brand-new models often pose a barrier for many potential buyers. This is where refurbished laptops step in, offering a budget-friendly alternative without compromising on quality. If you are considering investing in a refurbished laptop, this comprehensive guide is your roadmap to making an informed decision. **What Is a Refurbished Laptop?** A refurbished laptop is a pre-owned device that has been returned to the manufacturer or a certified refurbisher due to a defect, a trade-in, or simply because the original buyer changed their mind. These laptops go through rigorous testing, repairs, and quality assurance processes to make sure they meet high standards before being resold. **The Advantages of Opting for Refurbished Laptops** **1. Cost Savings** One of the primary benefits of buying a refurbished laptop is the substantial cost savings. Refurbished laptops are typically sold at a fraction of the price of new ones, making excellent technology accessible to a broader audience. This is especially useful for students, small businesses, and budget-conscious individuals. **2. Environmental Impact** Opting for a refurbished laptop is an eco-friendly choice. By purchasing refurbished, you are helping to reduce electronic waste and the demand for new resources. This contributes to a more sustainable environment by extending the lifecycle of existing technology. **3. Quality Assurance** Refurbished laptops from reliable sellers go through rigorous testing and quality checks. These processes ensure that the laptops are restored to a like-new condition, often including the replacement of defective components and thorough cleaning. Many refurbished laptops come with warranties, providing peace of mind to buyers. **Key Considerations When Purchasing Refurbished Laptops** **1. Source Reliability** When choosing a refurbished laptop, it is essential to pick a trusted seller or refurbisher with a proven track record of quality and reliability. Opt for reputable brands or certified refurbishment programs to ensure transparency, authenticity, and comprehensive after-sales support. **2. Warranty Coverage** Prioritize refurbished laptops that come with warranty coverage, providing protection against potential defects or malfunctions. A solid warranty demonstrates the seller's confidence in their products and safeguards your investment. **3. Return Policy** Familiarize yourself with the seller's return policy to understand your options in case the refurbished laptop doesn't meet your expectations. A flexible and transparent return policy allows for hassle-free returns or exchanges, ensuring customer satisfaction and minimizing risk. **4. Condition Assessment** Carefully examine the condition of the refurbished laptop before making a purchase. While minor cosmetic imperfections are common, make sure the device's functionality and performance meet your requirements. Detailed condition descriptions provided by the seller can help you make an informed decision. **5. Customization Options** Explore the customization options offered by refurbishers, such as hardware upgrades or software installations, to tailor the refurbished laptop to your specific needs and preferences. Upgrading components like RAM or storage capacity can improve performance and extend the device's lifespan. **Popular Brands and Models** **1. Apple MacBook** Refurbished Apple MacBooks are highly sought after for their build quality, performance, and design. **2. Dell Latitude and XPS** Dell's Latitude and XPS series are popular choices for refurbished laptops. The Latitude series is known for its durability and enterprise features, while the XPS series offers high-end performance and sleek design. **3. HP EliteBook and Spectre** HP's EliteBook series is designed for business professionals, offering robust security features and durability. The Spectre series, on the other hand, is known for its premium design and performance, making it an excellent choice for personal use. **Additional Considerations** **1. Energy Efficiency** Refurbished laptops often have energy-efficient components, which can reduce power consumption and lower utility bills. Many refurbished models also comply with energy standards and certifications, making them an environmentally friendly choice. **2. Thorough Cleaning and Refurbishment Process** Refurbished laptops undergo a meticulous cleaning process to ensure they are free from dust and debris. Internal components are wiped clean to prevent overheating and ensure top overall performance. The refurbishment process also includes repairing or replacing faulty components, upgrading hardware if necessary, and reinstalling the operating system so the laptop functions like new. **3. Software and Hardware Upgrades** Many refurbishers offer the option to upgrade both the hardware and software of the laptop. You can request extra RAM, a larger hard drive, or even a better graphics card. Software upgrades might include installing the latest operating system or productivity software, enhancing the laptop's functionality and lifespan. **4. Specialized Refurbishers** Some companies focus on refurbishing laptops from specific manufacturers or of specific types, such as gaming laptops, business laptops, or educational laptops. These specialized refurbishers have expertise in restoring laptops to their peak performance, often offering extended warranties and support. **Conclusion** Buying a refurbished laptop can be a smart and cost-effective choice, providing excellent technology at a fraction of the price of new devices. By understanding the benefits and considering key factors such as the reputation of the seller, warranty, condition, and upgrade options, you can make an informed purchase that meets your needs. Whether you are a student, a professional, or someone looking for a reliable laptop without breaking the bank, refurbished laptops offer an excellent solution.
liong
1,875,379
How to use Nodies for your Dapp
As Web3 becomes more streamlined and decentralized app development increases, the need for...
0
2024-06-03T13:11:25
https://dev.to/mozes721/how-to-use-nodies-for-your-dapp-4g21
webdev, web3, blockchain, react
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a6uh6xt6daduddumhsmq.png) As Web3 becomes more streamlined and decentralized app development increases, the need for lightning-fast gateway providers to decentralized blockchain data also grows. Nodies addresses this need by offering efficient and reliable access to blockchain data. --- ## How does Nodies achieve it? In a nutshell, Nodies is part of an ongoing partnership with the Pocket Network Foundation; you can find more here. It has implemented public RPC endpoints utilizing the Pocket Network's decentralized infrastructure for relaying RPC requests. > Remote Procedure Call (RPC): A protocol that enables one program to execute a procedure or service in another program on a different machine within a network. In blockchain, it facilitates communication and data exchange between nodes, clients, and servers in the blockchain network. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mv71ig7snawwse2nn5sf.png) ## Integrate Nodies into your NextJS dapp As a quick setup we will create a barebones wallet connect using the 'wagmi' package, including an RPC URL provided by Nodies. Let's start by creating a bare 🦴 NextJS app with `npx create-next-app@latest` Then just follow the guide from the Family ConnectKit docs here. ``` const config = createConfig( getDefaultConfig({ chains: [mainnet, polygon, avalanche], transports: { [mainnet.id]: http( `https://lb.nodies.app/v1/5e9daed367d1454fab7c75f0ec8aceff`, //Add here! ), }, walletConnectProjectId: process.env.NEXT_PUBLIC_WALLETCONNECT_PROJECT_ID as string, appName: "Get started with Nodie powered by POKT", appDescription: "Walllet Connect with Nodie", appUrl: "https://family.co", appIcon: "https://family.co/logo.png", }), ); ``` From the _Web3Provider.tsx_ example, add your preferred networks like polygon, avalanche, etc., and inside the transport config pass in the Nodies HTTP link. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yt8mgyqswxvwys4ytyot.png) You can reach the site here; on the left side you just copy the endpoint link and paste it. On the right side, when you log in with GitHub, you get added features like logs, statistics, API keys, and even WebSockets coming soon! 😉 > Note: Be sure to add your .env file with the WalletConnect Project ID; follow this link to create an account and generate the projectId key. When all is set and done with the short guide referenced above, you should see something like this when connected. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xqf3t9kjjqi7apuabcb2.png) If you encounter issues with NextJS (I sure did), below I have attached a dead simple repo including everything above (excluding .env, duhhh) https://github.com/Mozes721/WalletConnect ## Honorable Mentions If you want something even more flexible, with additional features like utilizing your own backend for your Dapp, you can go up a notch using https://docs.grove.city/guides/getting-started/welcome-to-grove As a Go, Rust enthusiast it's tough to ignore the added features at Grove, also powered by POKT, to be utilized when needed. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b97yxiszjw2twxfg7dl4.jpg) --- There are several free RPC endpoints available, which vary in accessibility, user experience (UX), and performance. The setup was quite simple and intuitive, making it relatively easy to build on top of them.
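Since these endpoints can differ in performance, it is worth sanity-checking the URL you copied from the dashboard before building on top of it. Here is a rough sketch — the placeholder path stands in for your own Nodies app endpoint, and `eth_blockNumber` is just a convenient standard JSON-RPC method to test with:

```bash
# Replace <your-app-id> with the endpoint copied from your Nodies dashboard
curl -s -X POST "https://lb.nodies.app/v1/<your-app-id>" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}'
```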
Such endpoints are beneficial for web3, DePIN infrastructure, or any projects that involve working with blockchain data and creating API endpoints. If you encounter issues or have any questions, feel free to contact me directly or ask in the Discord group here. For more on POKT Network you can check out the link below. 👇 https://pokt.network/blog
mozes721
1,875,378
Abstracting State Management in React with Typescript, and its Benefits
Bizarre illustration by Google Gemini AI. State management is something unavoidable on the UI, be it...
0
2024-06-03T13:10:36
https://dev.to/bwca/abstracting-state-management-in-react-with-typescript-and-its-benefits-8mk
react, typescript, redux, zustand
_Bizarre illustration by Google Gemini AI._ State management is something unavoidable on the UI, be it local component scoped, or global, which in some enterprise applications can grow to gargantuan sizes. The topic I would like to dwell upon today is tight coupling, which can be created by a state management library, how state management can be abstracted and what benefits and drawbacks it has. Perhaps it should be noted that for the applications that are not maintained in the long run or are short-lived, it does not really matter which state management tool is used or if there is an abstraction at all. Yet, those should be obvious. As a part of this article we will take a look at `Redux` in a `React` application, the coupling it creates, how it can be abstracted and tested and how the state management tool can be swapped once the abstraction has been created, so the public api of the state management remains the same for the app, but the actual provider of it changes. Firstly let us create a `React` application and add `Redux`: ```bash npx create-react-app demo_react-abstracting-state-management --template typescript && install @reduxjs/toolkit react-redux ``` Following the [quick start](https://react-redux.js.org/tutorials/typescript-quick-start) we can add a counter state, a component to render it, and get something similar to what is found in [this commit](https://github.com/Bwca/demo_react-abstracting-state-management/commit/dc8bf22b6a947b0dc74edc3e0a6be9fe1bf0f703). Our counter slice is the following, it is almost identical to the one from the `Redux` quickstart: ```typescript import { createSlice, PayloadAction } from '@reduxjs/toolkit'; import type { RootState } from '../../store'; // Define a type for the slice state interface CounterState { value: number; } // Define the initial state using that type const initialState: CounterState = { value: 0, }; export const counterSlice = createSlice({ name: 'counter', // `createSlice` will infer the state type from the `initialState` argument initialState, reducers: { increment: (state) => { state.value += 1; }, decrement: (state) => { state.value -= 1; }, // Use the PayloadAction type to declare the contents of `action.payload` incrementByAmount: (state, action: PayloadAction<number>) => { state.value += action.payload; }, }, }); export const { increment, decrement, incrementByAmount } = counterSlice.actions; ``` The noticeable difference is that we are placing `Redux`-related items in `store/redux`, because we intend to have abstraction in `store` and `store/redux` will be one of the implementations. The end goal is to provide an abstract exports from `store`, in a way that consumers are no longer coupled with a specific state management library. Apart from a significant amount of boilerplate, which is not critical, what problem does it introduce to the application? The `Counter` component is now coupled with `Redux`, and so is our whole application, since it is wrapped in the `Redux` provider. While this is not a problem if your tech stack is married to `Redux`, this somewhat is limiting and coupling: changes to `Redux` in the long run could cause introducing changes to `Counter` component or other consumers, since all components utilizing imports from `Redux` will be coupled with it. 
See the `Counter` component: ```typescript import { decrement, increment, useAppDispatch, useAppSelector } from '../../store/redux'; export const Counter = () => { const count = useAppSelector((state) => state.counter.value); const dispatch = useAppDispatch(); return ( <div> <div> <button aria-label="Increment value" onClick={() => dispatch(increment())} > Increment </button> <span>{count}</span> <button aria-label="Decrement value" onClick={() => dispatch(decrement())} > Decrement </button> </div> </div> ); }; ``` Were we to migrate to some other state management tool, the scope of change would be large as well, since `Redux` would have roots all over the place. You could consider it a soft form of vendor lock, when you could migrate, but the effort is not worth it, it is beneficial for libraries to impose such bounds, but not necessarily beneficial for your application, especially if a better alternative appears on the horizon. Looking at the exposed state, we can deduct some abstractions: a model for state, a model for hook to access state and methods to modify it and a model for the state provider, which is a wrapper for our application. Observe the [following commit](https://github.com/Bwca/demo_react-abstracting-state-management/commit/bf69400fcfe6bd7998c8d9a3d57bc14fba16f4b4). So the state and managing methods can be expressed with the following interface: ```typescript // src/store/models/counter.state.ts export interface CounterState { count: number; increment: () => void; decrement: () => void; } ``` The hook delivering them would have the following functional interface: ```typescript // src/store/models/counter-state-hook.model.ts import { CounterState } from './counter.state'; export interface CounterStateHook { (): CounterState; } ``` And, finally, the provider: ```typescript // src/store/models/state-provider.model.ts import { ReactElement, ReactNode } from 'react'; export interface StateProvider { (args: { children?: ReactNode }): ReactElement; } ``` Evidently, the `Redux` items, currently present in our application hardly conform to these abstractions, and we would need to create an adapter to make things compliant, so let us create one, which will encapsulate `Redux` state management: ```typescript // src/store/redux/features/counter/counter-hook.ts import { useCallback } from 'react'; import { useAppDispatch, useAppSelector } from '../../hooks'; import { decrement, increment } from './counter-slice'; import { CounterStateHook } from '../../../models'; export const useCounterHook: CounterStateHook = () => { const count = useAppSelector((state) => state.counter.value); const dispatch = useAppDispatch(); const inc = useCallback(() => { dispatch(increment()); }, [dispatch]); const dec = useCallback(() => { dispatch(decrement()); }, [dispatch]); return { count, decrement: dec, increment: inc, }; }; ``` Also, the `Redux` provider now needs to be adapted to conform to the provider interface: ```typescript // src/store/redux/StateProvider.tsx import { Provider } from 'react-redux'; import { store } from './store'; import { StateProvider } from '../models'; export const StateContextProvider: StateProvider = ({ children }) => { return <Provider store={store}>{children}</Provider>; }; ``` With these in place, we can now provide abstract exports from `store` which conceal the actual implementation of the underlying state management library. 
Here goes our new state management provider: ```typescript // src/store/state.provider.ts import { StateProvider as Provider } from './models'; import { StateContextProvider as ReduxStateContextProvider } from './redux'; export const StateProvider: Provider = ReduxStateContextProvider; ``` And the hook: ```typescript // src/store/counter.hook.ts import { CounterStateHook } from './models'; import { useCounterHook as ReduxUseCounterHook } from './redux'; export const useCounter: CounterStateHook = ReduxUseCounterHook; ``` Which in turn means we can use them in `Counter`: ```typescript import { useCounter } from '../../store'; export const Counter = () => { const { increment, decrement, count } = useCounter(); return ( <div> <div> <button aria-label="Increment value" onClick={increment}> Increment </button> <span>{count}</span> <button aria-label="Decrement value" onClick={decrement}> Decrement </button> </div> </div> ); }; ``` The `Counter` no longer has any idea which library manages the state, or even if it s a library at all, which is good. Now, provider for the app, and a small cleanup for imports: ```typescript // src/index.tsx import { StrictMode } from 'react'; import { createRoot } from 'react-dom/client'; import reportWebVitals from './reportWebVitals'; import { App } from './App'; import { StateProvider } from './store'; const root = createRoot(document.getElementById('root') as HTMLElement); root.render( <StrictMode> <StateProvider> <App /> </StateProvider> </StrictMode> ); // If you want to start measuring performance in your app, pass a function // to log results (for example: reportWebVitals(console.log)) // or send to an analytics endpoint. Learn more: https://bit.ly/CRA-vitals reportWebVitals(); ``` Some state provider, no details, also good. Now, before we proceed to migrate to a different state management tool, what we could do is write tests covering the public api of our `store` module, namely the `useCounter` hook, which we can do using `React Testing Library`: ```typescript import { act } from 'react'; import { renderHook } from '@testing-library/react'; import { useCounter } from './counter.hook'; import { StateProvider } from './state.provider'; describe('Tests for the counter state', () => { test('should increment counter', () => { const { result } = renderHook(useCounter, { wrapper: StateProvider }); act(() => { result.current.increment(); }); expect(result.current.count).toBe(1); }); test('should decrement counter', () => { const { result } = renderHook(useCounter, { wrapper: StateProvider }); const current = result.current.count; act(() => { result.current.decrement(); }); expect(result.current.count).toBe(current - 1); }); }); ``` Once we swap the implementation, the test will show us if everything is working, and we do not need to re-visit consumers. As a poof of concept for state management tool migration, I chose `Zustand`, because all the kool kids write articles about it and I had never used it before. Apparently it is pronounced `/ˈʦuːʃtant/` and the name itself is taken from German, but don't take my word for it. ```bash npm i zustand ``` So we create a folder `zustand` next to the `redux`, it is going to be another implementation of our abstraction. 
It has no providers, so we would have to implement an empty one to comply with our abstraction (observe the [commit](https://github.com/Bwca/demo_react-abstracting-state-management/commit/1129c0070e20d89f9748ce32986e58a8d9177814)): ```typescript // src/store/zustand/StateProvider.tsx import { StateProvider } from '../models'; export const StateContextProvider: StateProvider = ({ children }) => { return <>{children}</>; }; ``` However, its store-creating function actually accepts an interface, and we can pass the existing one we created earlier, `CounterState`, which means no extra adapters, it will just work (which is amazingly convenient): ```typescript // src/store/zustand/counter.hook.ts import { create } from 'zustand'; import { CounterState } from '../models'; export const useCounter = create<CounterState>((set) => ({ count: 0, increment: () => set((state) => ({ count: state.count + 1 })), decrement: () => set((state) => ({ count: state.count - 1 })), })); ``` All that is left is to swap implementations for the hook: ```typescript // src/store/counter.hook.ts import { CounterStateHook } from './models'; import { useCounterHook as ReduxUseCounterHook } from './redux'; import { useCounter as ZustandUseCounterHook } from './zustand'; export const useCounter: CounterStateHook = ZustandUseCounterHook; // || ReduxUseCounterHook; ``` And the state provider: ```typescript // src/store/state.provider.ts import { StateProvider as Provider } from './models'; import { StateContextProvider as ReduxStateContextProvider } from './redux'; import { StateContextProvider as ZustandStateProvider } from './zustand'; export const StateProvider: Provider = ZustandStateProvider; // || ReduxStateContextProvider; ``` We can run the tests now and make sure that switching the underlying library, which does the heavy lifting for state management, had no impact on the `store` module's public api, which means the consuming components need no changes. So to sum it up, what are the benefits? Looser coupling between components and state management, more flexibility, and easier migration when changing the state management library. What are the drawbacks? Some abstraction overhead, the need to write interfaces to define the public api, and potentially adapters to make the state management library comply. That's it, have fun, the repo is [here](https://github.com/Bwca/demo_react-abstracting-state-management) :)
bwca
1,875,377
Plan First, Code Later
As a software engineer, getting caught up in the excitement of writing code and building new features...
0
2024-06-03T13:09:59
https://dev.to/avwerosuoghene/plan-first-code-later-7fo
webdev, productivity, softwaredevelopment, devjournal
As a software engineer, getting caught up in the excitement of writing code and building new features is pretty easy. However, successful developers understand the importance of taking the time to plan your project thoroughly. Here’s why you should always plan first and code later. **Define Clear Objectives** Setting clear objectives provides direction and purpose for your project. They help ensure everyone on the team is aligned and working towards the same goals. Imagine you're tasked with developing a new e-commerce platform. Without clear objectives, you might focus on adding flashy features like animations and complex user interfaces. However, if your primary purpose is to increase sales, your efforts would be better spent optimizing the checkout process and improving load times. **Create a Blueprint** A well-thought-out blueprint serves as a roadmap for your project. It helps you visualize the structure and flow of your application, making it easier to identify potential issues before they arise. **Break Down Tasks** Breaking down the project into smaller, manageable tasks makes it easier to track progress and stay organized. The task of handling a large-scale project can feel overwhelming unless you effectively break it down into manageable segments. It also helps you identify dependencies and prioritize work more effectively. For the e-commerce platform mentioned earlier, you can break down tasks into components such as user authentication, product listing, shopping cart functionality, payment processing, and order management. This way, you can tackle each component individually, ensuring thorough testing and integration. **Choose the Right Tools** The tools and technologies you choose can significantly impact your project's success. Selecting the right tools ensures your team can work efficiently and leverage their expertise effectively. If your e-commerce app requires real-time updates, such as inventory changes and live customer support, choosing a technology like Node.js with WebSocket support would be more appropriate than a traditional request-response model. Additionally, you can never go wrong using version control tools like Git to ensure collaborative and organized code management. **Consider Scalability** Planning for scalability ensures that your application can handle increased load and growth over time. This helps avoid costly refactoring and performance issues in the future. **Plan for Testing** Incorporating testing from the beginning ensures code quality and reliability. It helps catch bugs early, reducing the cost and effort required to fix them later. For the e-commerce app, create unit tests for individual functions like user registration and payment processing. Implement integration tests to ensure different components work together seamlessly, such as adding items to the cart and completing a purchase. **Conclusion** By investing time in thorough planning, you set a strong foundation for your project. This approach leads to cleaner, more efficient code and a smoother development process. Remember, the key to successful software development lies in thinking first and coding later.
avwerosuoghene
1,875,376
How to learn from online SQL Certification Courses in 2024?
SQL or Structured Query Language is the establishment of the data business. In the event that you are...
0
2024-06-03T13:09:04
https://dev.to/pawan_saxena_b955f317d3d7/how-to-learn-from-online-sql-certification-courses-in-2024-2oig
sql
SQL, or Structured Query Language, is the foundation of the data industry. If you are keen on building a career in a data-driven role — data analyst, data engineer, business analyst, database engineer, and the list goes on — SQL is essential. As a core part of database administration and a requirement for many projects, SQL is a must-have for any role or task that works with data. SQL lets you access, change, and manipulate data in its most basic form. This article explores how you can become a skilled SQL developer through a SQL developer training course for [sql certifications](https://www.janbasktraining.com/online-sql-server-training). **What is SQL?** Structured Query Language is a standard language for working with relational databases. [SQL](https://en.wikipedia.org/wiki/SQL) is used to insert, delete, modify, and search database records. With SQL, you can perform many tasks, including the development and maintenance of the database itself. **How To Become A SQL Developer?** Becoming a SQL developer means mastering SQL for day-to-day database work and building the supporting skills employers expect; the sections below cover the job requirements, responsibilities, and skills typically developed through online sql training. **SQL Developer Job Requirements** The job requirements vary from one employer to another, and candidates enter the field with different levels of formal education. Here are some basic requirements that will give you an edge: A bachelor's degree in computer science or a related field — this is always a preferred qualification. A few years of experience as a SQL developer or in similar positions — this is particularly relevant if you are pursuing senior-level roles. A solid understanding of SQL programming and databases, which is essential for almost every SQL developer position. Strong critical-thinking and problem-solving skills. **What Does a SQL Developer Do?** While specific duties may vary, a SQL developer creates and maintains databases to suit business needs. They are familiar with a wide range of database software, including Oracle and Microsoft products. Some of the tasks and responsibilities a SQL developer performs are: A SQL developer is responsible for database system designs, which are used for storing and accessing business-related information. They are responsible for creating, updating, and deleting data as required by a particular application. A SQL developer makes informed decisions about suitable database languages and technologies. They evaluate the organization's infrastructure, run various diagnostic tests, and update information security systems for optimal performance and efficiency. They also document code, provide progress reports, and perform code review and peer feedback. They are additionally responsible for testing code for bugs and implementing fixes. **Skills Required to Become a SQL Developer** Here are some basic skills a SQL developer needs to be successful in this role: **1. Database** Databases are used as a layered structure for building services by separating business logic from interfaces. A SQL developer handles database design essentials and designs the logical and physical models of relational databases. **2. SQL** SQL allows you to manipulate and access databases. It enables you to write complex queries using temporary tables and table variables. SQL developers also build effective reporting solutions, for example with SQL Server Reporting Services. **3. T-SQL** T-SQL is short for Transact-SQL, an extension of the SQL language that provides additional capabilities. It is used to work with data from legacy systems using complex T-SQL statements. **4. SSIS** Microsoft SSIS (SQL Server Integration Services) is an ETL tool from Microsoft. You can integrate data from different sources, store it, and cleanse it. SQL developers are involved in creating and executing SSIS solutions for various business units across the organization. **5. Analytical Skills** Developers need excellent analytical skills to understand clients' needs and design software according to their requirements. **6. Salary Expectations** In the United States, SQL developers earn an average salary of $71,486 per year after a SQL developer training course. In India, the average salary of a SQL developer is Rs.440,176. **Why Pursue Becoming a SQL Developer?** SQL is an important technology. In the information age we live in, the volume of data will keep increasing, which means excellent jobs, salaries, and career growth for aspiring SQL developers. This is not limited to computer science; industries such as finance, retail, and healthcare all need people who can manage their databases. The SQL developer's role will see exciting changes as organizations adopt data-driven technologies, and advances in SQL technologies will demand even more skilled and dedicated SQL developers — creating even greater opportunities and rewards for capable developers pursuing online sql training.
pawan_saxena_b955f317d3d7
1,875,341
How to Prevent SSH Timeout on Linux Systems
If you've ever experienced your SSH connection freezing due to inactivity, you're not alone. This...
0
2024-06-03T12:32:10
https://geeksta.net/geeklog/how-to-prevent-ssh-timeout-on-linux-systems/
linux, sysadmin, ssh, howto
--- title: How to Prevent SSH Timeout on Linux Systems published: true date: 2024-06-03 13:08:49 UTC tags: linux, sysadmin, ssh, howto canonical_url: https://geeksta.net/geeklog/how-to-prevent-ssh-timeout-on-linux-systems/ --- If you've ever experienced your SSH connection freezing due to inactivity, you're not alone. This common issue can be particularly frustrating, but fortunately, there's a simple solution. By adjusting your SSH client configuration, you can keep the connection alive. Follow these steps to prevent SSH timeout on your Linux machine. ## Edit the SSH Configuration First, you'll need to edit the SSH configuration file. If it doesn't already exist, create it. Start a terminal and open the file with your preferred editor, for example: ``` vim ~/.ssh/config ``` Next, add the following lines to ensure your SSH connection remains active: ``` Host * ServerAliveInterval 60 ServerAliveCountMax 3 ``` Save the file and exit the text editor. For the changes to take effect, you need to restart your SSH connection. ## Explanation Here's what these settings do: - `ServerAliveInterval 60`: This sends a keep-alive packet to the server every 60 seconds. - `ServerAliveCountMax 3`: If the server does not respond after three keep-alive packets, the connection will be terminated. That's it! Your SSH connection should now remain active even during periods of inactivity. Adjust the `ServerAliveInterval` value as needed to fit your specific requirements. ## Conclusion By following this short and simple guide, you can prevent your SSH connections from freezing due to inactivity. This small configuration change can save you from the hassle of repeatedly reconnecting to your remote servers. --- Thank you for reading! This article was written by Ramiro Gómez using open source software and the assistance of AI tools. While I strive to ensure accurate information, please verify any details independently before taking action. For more articles, visit the [Geeklog on geeksta.net](https://geeksta.net/geeklog/).
geeksta
1,875,375
Cleansing Delight: Unraveling Hotel Soap Offerings
Intro: Hotels are the supreme location for convenience as well as leisure. Every tourist constantly...
0
2024-06-03T13:08:25
https://dev.to/jonathan_hulseh_7a93b8f43/cleansing-delight-unraveling-hotel-soap-offerings-59bn
Intro: Hotels are the supreme location for convenience as well as leisure. Every tourist constantly searches for a location that will certainly offer first-class convenience as well as high-end. The facilities offered through hotels like smooth bed linen, space solutions, as well as toiletries are typically the crowning achievement. Amongst resort toiletries, one of the absolute most typical is soap, which is available in various sizes and shapes. We'll check out the benefits, development, security, as well as quality of Cleansing Delight soap offerings. Benefits of Cleansing Delight Soap: Using Cleansing Delight Soap includes various benefits. First of all, Cleansing Delight Soap is mild on the skin layer, making it appropriate for individuals with delicate skin layers. Second of all, it offers a rejuvenating fragrance that remains for hrs. Third, Cleansing Delight Soap is available in a practical dimension that enables tourists to bring all of them in their bags with no inconvenience. Furthermore, disposable hotel soap is environmentally friendly, making it a remarkable option for individuals who appreciate the atmosphere. Development of Cleansing Delight Soap: The development of Cleansing Delight Soap has made it stand apart from various other soap offerings on the market. Along with the increase in innovation, the production procedure of Cleansing Delight Soap was enhanced to guarantee the manufacturing of top-quality soap. The hotel soap is created utilizing all-organic components, which guarantees that the soap is mild on the skin layer. Additionally, Cleansing Delight Soap is available in various styles, which contributes to its aesthetical charm. Finally, the soap is packaged in a distinct manner which makes it attractive to tourists. Security of Cleansing Delight Soap: The security of Cleansing Delight Soap can't be overemphasized. The soap is evaluated, guaranteeing that it is risk-free towards utilization on the skin layer. The soap has likewise gone through several quality examinations, which guarantee that it is sanitary as well as risk-free for tourists. Additionally, Cleansing Delight Soap is actually without hazardous chemicals like sulfates, which guarantees that it doesn't trigger any type of allergies. Ways to Utilize Cleansing Delight Soap: Utilizing Cleansing Delight Soap is an easy procedure that needs a little bit of initiative. First of all, tourists have to guarantee that their palms are damp. Second of all, they have to lather the soap in their palms up till an abundant foam is created. Third, they have to spread out the foam over their body system as well as deal with it, guaranteeing that locations are dealt with. After that, they had to wash on their own along with sprinkling up till the soap was cleaned off. Lastly, tourists can easily completely dry out their skin layer utilizing a towel, which leaves behind their skin layer sensation smooth as well as revitalized. Solution as well as Quality of Cleansing Delight Soap: The solution as well as the quality of Cleansing Delight Soap are remarkable. The soap is quickly available in every resort space, which guarantees that tourists can easily preserve great health throughout their remain. The hotel soap bar is likewise changed every day, guaranteeing that tourists have accessibility to cleanse soap daily. Additionally, the quality of Cleansing Delight Soap is constantly constant, which guarantees that tourists will certainly constantly have a favorable expertise while utilizing the soap. 
Application of Cleansing Delight Soap: Cleansing Delight Soap is suitable for use in a variety of settings. Travelers can use the soap in their hotel rooms to maintain good hygiene throughout their stay. It can also be used at home, giving people the same quality they experience in hotels. Finally, Cleansing Delight Soap makes a good gift for family and friends, offering them a distinctive and luxurious experience. Source: https://www.kailai-amenity.com/application/hotel-soap
jonathan_hulseh_7a93b8f43
1,875,374
Understanding LoRA - Low-Rank Adaptation for Efficient Machine Learning
In the evolving landscape of machine learning, the quest for more efficient training methods is...
0
2024-06-03T13:07:57
https://victorleungtw.com/2024/06/03/lora/
machinelearning, efficiency, lora, decomposition
In the evolving landscape of machine learning, the quest for more efficient training methods is constant. One such innovation that has gained attention is Low-Rank Adaptation (LoRA). This technique introduces a clever way to optimize the training process by decomposing the model's weight matrices into smaller, more manageable components. In this post, we'll delve into the workings of LoRA, its benefits, and its potential applications. ![](https://victorleungtw.com/static/07926666b0f8d09c8ce870f4e056a10c/a9a89/2024-06-03.webp) #### What is LoRA? Low-Rank Adaptation, or LoRA, is a technique designed to enhance the efficiency of training large machine learning models. Traditional training methods involve updating the entire weight matrix of a model, which can be computationally intensive and time-consuming. LoRA offers a solution by decomposing these weight matrices into two smaller, lower-rank matrices. Instead of training the full weight matrix, LoRA trains these smaller matrices, reducing the computational load and speeding up the training process. #### How Does LoRA Work? To understand LoRA, let's break down its process into simpler steps: 1. **Decomposition of Weight Matrices**: - In a neural network, weights are typically represented by large matrices. LoRA decomposes these weight matrices into the product of two smaller matrices: \( W \approx A \times B \), where \( W \) is the original weight matrix, and \( A \) and \( B \) are the decomposed low-rank matrices. 2. **Training the Low-Rank Matrices**: - Instead of updating the full weight matrix \( W \) during training, LoRA updates the smaller matrices \( A \) and \( B \). Since these matrices are of lower rank, they have significantly fewer parameters than \( W \), making the training process more efficient. 3. **Reconstructing the Weight Matrix**: - After training, the original weight matrix \( W \) can be approximated by multiplying the trained low-rank matrices \( A \) and \( B \). This approximation is often sufficient for the model to perform well, while requiring less computational power. #### Benefits of LoRA LoRA offers several advantages that make it an attractive option for machine learning practitioners: 1. **Computational Efficiency**: - By reducing the number of parameters that need to be updated during training, LoRA significantly cuts down on computational resources and training time. 2. **Memory Savings**: - The smaller low-rank matrices consume less memory, which is particularly beneficial when training large models on hardware with limited memory capacity. 3. **Scalability**: - LoRA makes it feasible to train larger models or to train existing models on larger datasets, thereby improving their performance and generalization. 4. **Flexibility**: - The decomposition approach of LoRA can be applied to various types of neural networks, including convolutional and recurrent neural networks, making it a versatile tool in the machine learning toolkit. #### Potential Applications of LoRA LoRA's efficiency and flexibility open up a range of applications across different domains: 1. **Natural Language Processing (NLP)**: - Large language models, such as BERT and GPT, can benefit from LoRA by reducing training time and computational costs, enabling more frequent updates and fine-tuning. 2. **Computer Vision**: - In tasks like image classification and object detection, LoRA can help train deeper and more complex models without the prohibitive computational expense. 3. 
**Recommendation Systems**: - LoRA can improve the training efficiency of recommendation algorithms, allowing for faster adaptation to changing user preferences and behaviors. 4. **Scientific Research**: - Researchers working on large-scale simulations and data analysis can leverage LoRA to accelerate their experiments and iterate more quickly. #### Conclusion LoRA represents a significant step forward in the pursuit of efficient machine learning. By decomposing weight matrices into smaller components, it reduces the computational and memory demands of training large models, making advanced machine learning techniques more accessible and practical. As the field continues to evolve, innovations like LoRA will play a crucial role in pushing the boundaries of what's possible with machine learning. Whether you're working in NLP, computer vision, or any other domain, LoRA offers a powerful tool to enhance your model training process.
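To make the decomposition concrete, here is a minimal NumPy sketch of the low-rank update described above. The dimensions (a 512x512 weight matrix) and the rank of 8 are arbitrary example values, not taken from any particular model, and the code is an illustration of the general scheme rather than the implementation used by any specific library:

```python
import numpy as np

d_out, d_in, rank = 512, 512, 8              # example sizes; rank is much smaller than d_in

W = np.random.randn(d_out, d_in)             # original weight matrix, kept frozen
A = np.random.randn(d_out, rank) * 0.01      # trainable low-rank factor
B = np.zeros((rank, d_in))                   # trainable low-rank factor, initialized to zero

def forward(x):
    # Adapted layer: the frozen weight plus the low-rank product A @ B.
    # Only A and B would receive gradient updates during fine-tuning.
    return x @ (W + A @ B).T

print("full-matrix parameters:", W.size)           # 262144
print("LoRA parameters:       ", A.size + B.size)  # 8192, roughly 3% of the full matrix
```

In practice the low-rank factors are attached to selected layers of a pretrained model (for example, the attention projections of a transformer), and a deep learning framework handles the gradient updates to the small matrices while the original weights stay frozen.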
victorleungtw
1,875,372
PACX ⁓ Command Line Utility Belt for Power Platform / Dataverse
When Power Platform CLI was introduced, most of the Dataverse community thought: "Command line?!...
0
2024-06-03T13:06:34
https://dev.to/_neronotte/pacx-command-line-utility-belt-for-power-platform-dataverse-e4e
github, powerplatform, pacx, opensource
When Power Platform CLI was introduced, most of the Dataverse community thought: > "Command line?! Why? We have such amazing XrmToolbox tools!" Then the DevOps tide 🌊 rose, and everyone started thinking > "how can I script *this*?" "how can I automate *this other stuff*?". [Power Platform CLI](https://learn.microsoft.com/en-us/power-platform/developer/cli/introduction?tabs=windows) is really powerful, and really helps with scripting management operations against Dataverse environments... but in day-to-day usage, a couple of friends and I started struggling with the _lack of specific commands that we thought would be useful_ for us. It's just like when you buy your first barbecue. When you don't have one, it's fine, you just use the oven. Then you buy it, and you are really happy, satisfied, and looking forward to cooking something. You start using it, and after a while you start thinking: "oh, it would be really useful if it had a smoker 🤔"; "oh, it's a shame it has no steamer for vegetables 😕". I was feeling the same about PAC CLI... that's why, with a couple of friends, we've created 🎉**PACX**🎉! **PACX** is a free-to-use, **open source**, command-line utility belt for Dataverse. Its aim is to extend the capabilities of PAC CLI by providing a lot of commands **designed by Power Platform Developers** to: - help with the automation of repetitive tasks 🤖 - provide easy access to hidden gems that the platform exposes only via API 🫣 - make development and deployment faster and more efficient 🚀 It's also a lot more than that. Built with an **XrmToolbox**-like plugin-based approach in mind, it is also an easy-to-use platform for developing your own tools and extensions. Take a look at the [official GitHub Repository](https://github.com/neronotte/Greg.Xrm.Command) for instructions on how to install the tool, and **drop a star if you like it**! {% youtube r16vbSdeFLk %}
_neronotte
1,875,371
Hire angular js Developers
Hire our in-house team of the best Angular developers who can build powerful front-end deployments as...
0
2024-06-03T13:05:28
https://dev.to/aayushi_parmar_aed8f8c592/hire-angular-js-developers-256f
hireangularjsdevelopers, hireangularjsdeveloperinusa, angularjsdevelopers
Hire our in-house team of the best [Angular developers](https://www.qsstechnosoft.com/hire-angularjs-developers/) who can build powerful front-end deployments to match your business needs. Each of our dedicated AngularJS programmers has 3+ years of average experience with proven expertise. QSS Technosoft is a pioneer in offering offshoring services to businesses across the globe. Our world-class Angular developers have a proven track record of successfully delivering robust Angular solutions tailored to your specific business requirements. 1. Strong Technical Proficiency 2. 3+ Years of Average Experience 3. Timely Project Delivery 4. Competitive Pricing 5. Easy Onboarding 6. Agile Development Methodology 7. Strict NDA Policies
aayushi_parmar_aed8f8c592
1,875,370
Designing User-Friendly Interfaces for Adult Websites
In the realm of adult website development, creating a user-friendly interface is paramount. A...
0
2024-06-03T13:00:10
https://dev.to/scarlettevans09/designing-user-friendly-interfaces-for-adult-websites-5a23
In the realm of adult website development, creating a user-friendly interface is paramount. A well-designed interface not only enhances the user experience but also ensures that visitors stay longer and engage more with your content. This blog will explore key strategies for designing user-friendly interfaces specifically for adult websites, focusing on usability, accessibility, and aesthetics. By the end, you'll have a comprehensive understanding of how to build an adult website that is both functional and appealing. ## Understanding the Audience Before diving into design specifics, it’s crucial to understand your target audience. The demographics, preferences, and behaviors of users visiting adult websites can vary widely. Conduct thorough market research to gather insights into what your audience values most in terms of navigation, content, and overall experience. ## Key Elements of User-Friendly Design ### 1. Intuitive Navigation: **i. Simple Menu Structure:** Keep the main menu straightforward, with clearly labeled categories. Avoid overloading the menu with too many options. **ii. Breadcrumbs:** Implement breadcrumbs to help users keep track of their location within the site, allowing them to easily navigate back to previous pages. **iii. Search Functionality:** A powerful search bar is essential, enabling users to find specific content quickly. Ensure the search feature is prominently placed and easy to use. ### 2. Responsive Design: **i. Mobile Optimization:** With a significant portion of users accessing adult websites via mobile devices, responsive design is non-negotiable. Ensure your site looks and functions well on all screen sizes. **ii. Fast Loading Times:** Optimize images and other media to reduce load times. A slow website can frustrate users and lead to higher bounce rates. ### 3. Content Organization: **i. Clear Categories and Tags:** Organize content into well-defined categories and use tags to make it easier for users to find related content. **ii. Featured Content:** Highlight popular or new content to draw attention and keep users engaged. ### 4. Aesthetic Appeal: **i. Clean Design:** Avoid clutter and focus on a clean, professional design. Use whitespace effectively to make the content stand out. **ii. Consistent Theme:** Maintain a consistent color scheme and typography throughout the site to create a cohesive look and feel. ### 5. Accessibility: **i. Keyboard Navigation:** Ensure that all functionalities of your website can be accessed via keyboard for users with disabilities. **ii. Alt Text for Images:** Use descriptive alt text for images to assist visually impaired users and improve SEO. **iii. Readable Fonts:** Choose fonts that are easy to read and avoid overly small text sizes. ## Enhancing User Engagement ### 1. Interactive Features: **i. Comments and Reviews:** Allow users to leave comments and reviews on content. This not only engages users but also provides valuable feedback. **ii. Ratings System:** Implement a ratings system for content. This helps users identify popular content quickly and encourages interaction. ### 2. Personalization: **i. User Profiles:** Allow users to create profiles where they can save favorite content and receive personalized recommendations. **ii. Recommendations:** Use algorithms to suggest content based on users’ viewing history and preferences. ### 3. Security and Privacy: SSL Encryption: Ensure your site uses SSL encryption to protect users’ data. 
Privacy Policy: Clearly outline your privacy policy to reassure users that their data is safe. Secure Payment Systems: If your site includes paid content, use secure payment gateways to protect users' financial information. ## Testing and Feedback ### 1. Usability Testing: **i. Beta Testing:** Before launching, conduct beta testing with a small group of users to identify any issues and gather feedback. **ii. A/B Testing:** Use A/B testing to compare different design elements and determine what works best for your audience. ### 2. Continuous Improvement: **User Feedback:** Regularly solicit feedback from users to understand their needs and preferences. Use this feedback to make ongoing improvements. **ii. Analytics:** Utilize analytics tools to track user behavior on your site. This data can provide insights into areas that need enhancement. ## Conclusion Designing user-friendly interfaces for adult websites involves a careful balance of aesthetics, functionality, and user experience. By focusing on intuitive navigation, responsive design, clear content organization, and accessibility, you can create a site that not only attracts visitors but also keeps them engaged. Additionally, incorporating interactive features, personalization, and robust security measures will further enhance user satisfaction. Remember, the key to successful adult website development lies in understanding your audience and continuously refining your design based on user feedback and analytics. With these strategies in place, you can build an adult website that stands out in a competitive market and provides a seamless, enjoyable experience for all users.
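As one concrete illustration of the personalization point above, here is a minimal, hypothetical Python sketch of a viewing-history-based recommendation. It simply suggests the tags a user has viewed most often; the data structures, tag names, and function are invented for the example and are not part of any specific recommendation engine:

```python
from collections import Counter

# Hypothetical viewing history: each entry is the set of tags on an item the user viewed
viewing_history = [
    {"category-a", "hd"},
    {"category-b", "hd"},
    {"category-a", "vr"},
]

def recommend_tags(history, top_n=2):
    """Suggest the tags the user has viewed most often."""
    counts = Counter(tag for tags in history for tag in tags)
    return [tag for tag, _ in counts.most_common(top_n)]

print(recommend_tags(viewing_history))  # e.g. ['category-a', 'hd']
```

A production system would of course use richer signals and a proper model, but the same idea applies: derive preferences from what the user has already watched and surface related content.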
scarlettevans09
1,876,738
Working with Gitlab on the CLI
Glab is an open-source tool that allows you to work with GitLab from the command line, eliminating...
0
2024-06-06T10:02:34
https://www.zufallsheld.de/2024/06/03/working-with-the-gitlab-cli/
gitlab, cli, glab
--- title: Working with Gitlab on the CLI published: true date: 2024-06-03 13:00:00 UTC tags: gitlab,gitlab,cli,glab canonical_url: https://www.zufallsheld.de/2024/06/03/working-with-the-gitlab-cli/ --- [Glab](https://gitlab.com/gitlab-org/cli#glab) is an open-source tool that allows you to work with GitLab from the command line, eliminating the need to switch to a browser to create or approve merge requests, start a pipeline run, or view issues. Glab can work with repositories hosted on gitlab.com as well as with your own GitLab instances. The tool automatically detects which instance it should work with. The CLI tool was started by Clement Sam and has been an official GitLab product since 2022. ## Setup Glab can be installed in various ways. Since it is written in Golang, the executable can be easily downloaded and run from the [releases page](https://gitlab.com/gitlab-org/cli/-/releases). Alternatively, Glab is also available in various package repositories. It runs on Linux, Windows, and macOS. All installation options can be found [here](https://gitlab.com/gitlab-org/cli/-/blob/main/docs/installation_options.md)! ### Registering with the GitLab Instance Before working with repositories, you need to authenticate with the GitLab instance. For this, you need a Personal Access Token, which you can create in your GitLab profile. For the gitlab.com instance, you can create it [here](https://gitlab.com/-/profile/personal_access_tokens). Assign a name to the token and select the “api” and “write\_repository” permissions. The generated token will be needed in the next step. Now, log in to the GitLab instance using the token by running `glab auth login` and answering the prompts. ``` > glab auth login ? What GitLab instance do you want to log into? gitlab.com - Logging into gitlab.com ? How would you like to login? Token Tip: you can generate a Personal Access Token here https://gitlab.com/-/profile/personal_access_tokens The minimum required scopes are 'api' and 'write_repository'. ? Paste your authentication token: **************************? Choose default git protocol HTTPS ? Authenticate Git with your GitLab credentials? Yes - glab config set -h gitlab.com git_protocol https ✓ Configured git protocol - glab config set -h gitlab.com api_protocol https ✓ Configured API protocol ✓ Logged in as rndmh3ro ``` You can verify a successful login with `glab auth status`. ``` > glab auth status gitlab.com ✓ Logged in to gitlab.com as rndmh3ro (/home/segu/.config/glab-cli/config.yml) ✓ Git operations for gitlab.com configured to use https protocol. ✓ API calls for gitlab.com are made over https protocol ✓ REST API Endpoint: https://gitlab.com/api/v4/ ✓ GraphQL Endpoint: https://gitlab.com/api/graphql/ ✓ Token: ************************** git.example.com ✓ Logged in to git.example.com as segu (/home/segu/.config/glab-cli/config.yml) ✓ Git operations for git.example.com configured to use https protocol. ✓ API calls for git.example.com are made over https protocol ✓ REST API Endpoint: https://git.example.com/api/v4/ ✓ GraphQL Endpoint: https://git.example.com/api/graphql/ ✓ Token: ************************** ``` ## Working with Repositories Once successfully logged into the GitLab instance, you can work with repositories using glab. ### Cloning a Repository To clone repositories with `glab`, run `glab repo clone path/to/repo`, followed by an optional target directory. ``` > glab repo clone gitlab-org/cli Cloning into 'cli'... remote: Enumerating objects: 18691, done. 
remote: Counting objects: 100% (72/72), done. remote: Compressing objects: 100% (34/34), done. remote: Total 18691 (delta 53), reused 39 (delta 37), pack-reused 18619 Receiving objects: 100% (18691/18691), 22.98 MiB | 5.97 MiB/s, done. Resolving deltas: 100% (12391/12391), done. ``` If you have multiple repositories in a group to clone, you can do this using `glab` as well. Use the `--group` option (or `-g`) to clone all repositories in the group sequentially: ``` > GITLAB_HOST=gitlab.com glab repo clone -g gitlab-org fatal: destination path 'verify-mr-123640-security-policy-project' already exists and is not an empty directory. Cloning into 'verify-mr-123640'... remote: Enumerating objects: 3, done. remote: Counting objects: 100% (3/3), done. remote: Compressing objects: 100% (2/2), done. remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0 Receiving objects: 100% (3/3), done. Cloning into 'without-srp'... remote: Enumerating objects: 30, done. remote: Counting objects: 100% (14/14), done. remote: Compressing objects: 100% (10/10), done. remote: Total 30 (delta 7), reused 4 (delta 4), pack-reused 16 Receiving objects: 100% (30/30), 5.94 KiB | 5.94 MiB/s, done. Resolving deltas: 100% (9/9), done. Cloning into 'container-scanning-with-sbom'... ``` ## Working with Merge Requests After checking out the repository, you can start working on issues or merge requests (MRs). A typical code change process often looks like this: You make your code changes, commit them, and push them to GitLab. Then, you want to create a merge request. Normally, you would now switch to the GitLab website to create the MR. Thanks to `glab`, you don’t need to leave the command line. Using `glab mr create`, you can interactively create an MR. You will be guided through the creation process, where you specify the title and description, and then you will be asked if you want to create the MR directly or view it in the web frontend before creating it. ``` > glab mr create ? Choose a template Open a blank merge request ? Title: New Feature ? Description <Received> ? What's next? Submit Creating merge request for test into master in gitlab-org/cli !351 New Feature (test) https://gitlab.com/gitlab-org/cli/-/merge_requests/1297 ``` You can then view it. If you want to do this in the browser, run `glab mr view` with the `--web` (or `-w`) parameter. ``` > glab mr view -w 1297 ``` You can also view the same content directly on the command line (including comments): ``` > glab mr view 1297 -R gitlab-org/cli open • opened by rndmh3ro about 1 hour ago docs: add installation options with wakemeops !1297 ## Description Add installation options with wakemeops-repository. Note: I'm not affiliated with WakeMeOps, just a happy user. ## Related Issues Resolves #1363 ## How has this been tested? not at all. 
## Screenshots (if appropriate): ### Types of changes [] Bug fix (non-breaking change which fixes an issue) [] New feature (non-breaking change which adds functionality) [] Breaking change (fix or feature that would cause existing functionality to change) [✓] Documentation [] Chore (Related to CI or Packaging to platforms) [] Test gap 0 upvotes • 0 downvotes • 5 comments Labels: Community contribution, documentation, linked-issue, tw::triaged, workflow::ready for review Assignees: rndmh3ro Reviewers: aqualls Pipeline Status: success (View pipeline with `glab ci view add_wakemeops_docs`) Approvals Status: Rule "All Members" insufficient approvals (0/1 required): Rule "/docs/" sufficient approvals (0/0 required): Amy Qualls aqualls - ✓ This merge request has 1 changes View this merge request on GitLab: https://gitlab.com/gitlab-org/cli/-/merge_requests/1297 ``` If you’re not working on the codebase yourself but are reviewing MRs from others, you can list open merge requests with `glab mr list`. ``` > glab mr list Showing 22 open merge requests on gitlab-org/cli (Page 1) !1297 gitlab-org/cli!1297 docs: add installation options with wakemeops (main) ← (add_wakemeops_docs) !1296 gitlab-org/cli!1296 fix(repo view): consider current host when viewing different repositories (#1362) (main) ← (1362_repo_view) !1295 gitlab-org/cli!1295 fix: `glab mr delete` should work properly for forks (main) ← (fix_mr_delete) ``` Afterwards, you can check out the MR you want to review: ``` > glab mr checkout 1297 ``` If you’re satisfied with the content, you can add a note to the Merge Request and then approve it directly: ``` > glab mr note -R gitlab-org/cli -m "LGTM" > glab mr approve - Approving Merge Request !1297 ✓ Approved ``` And of course, you can also merge the MR right away: ``` > glab mr merge ? What merge method would you like to use? Rebase and merge ? What's next? Submit ✓ Rebase successful ! No pipeline running on test ✓ Rebased and merged !1297 New Feature (test) https://gitlab.com/gitlab-org/cli/-/merge_requests/1297 ``` At the end of the day, you can view your merged MRs by using `glab mr list` options to see only merged (`-M`) or your own (`-a @me`) MRs: ``` > glab mr list -M -a @me Showing 4 merged merge requests on gitlab-org/cli (Page 1) !1279 gitlab-org/cli!1279 feat(schedule): Add commands to create and delete schedules (main) ← (create_del_sched) !1176 gitlab-org/cli!1176 feat(schedule): Add command to run schedules (main) ← (run_schedule) !1143 gitlab-org/cli!1143 docs: remove duplicate defaults in help (main) ← (fix_help_doc) !1112 gitlab-org/cli!1112 feat(schedule): Add command to list schedules (main) ← (sched_list) ``` ## Working with Pipelines Code changes are tested through an automatic CICD pipeline. Naturally, `glab` offers the ability to work with pipelines. To start a pipeline on the main branch, use the following command: ``` > glab ci run -b main Created pipeline (id: 540823), status: created, ref: main, weburl: https://git.example.com/example/project/-/pipelines/540823 ``` You can view the status of the pipeline like this: ``` > glab ci status (failed) • 01m 11s lint test https://git.example.com/example/project/-/pipelines/540812 SHA: 275cb8295c69db166e1b1c94936d4c4b67463701 Pipeline State: failed ? Choose an action: Exit ``` If a pipeline has failed, you can view the logs using `glab ci trace`: ``` > glab ci trace Searching for latest pipeline on test... Getting jobs for pipeline 540812... ? Select pipeline job to trace: kics-scan (1209237) - failed Getting job trace... 
Showing logs for kics-scan job #1209237 Running with gitlab-runner 14.10.1 (f761588f) on example-shared-docker swAou6b9 Resolving secrets Preparing the "docker" executor ``` `glab` works excellently with Unix pipes, so you can easily grep for errors: ``` > glab ci trace 1209237 | grep -i failed Queries failed to execute: 10 ERROR: Job failed: exit code 50 ``` ## Linting Speaking of errors - you can incorporate them wonderfully into the CICD configuration file, the `.gitlab-ci.yml`. If you change this file, for example to add a new stage, and make a mistake, you will usually only notice it after you have pushed your changes and wonder why the pipeline does not start. Fortunately, you can check (“lint”) the configuration with `glab`. If an error has crept in, `glab ci lint` will detect it. ``` > glab ci lint Validating... .gitlab-ci.yml is invalid 1 (<unknown>): did not find expected key while parsing a block mapping at line 2 column 1 ``` After correction, the linting will then report success: ``` > glab ci lint Validating... ✓ CI/CD YAML is valid! ``` ## Working with Schedules I am particularly proud of the feature to create and run Pipeline Schedules with `glab`, because I implemented it. Pipeline Schedules are designed to automatically run pipelines at regular intervals. You can create these with `glab`. To do this, you pass a cron expression (which defines when the pipeline should run), a description, and the branch on which the pipeline should run: ``` > glab schedule create --cron "0 2 * * *" --description "Run main pipeline everyday" --ref "main" --variable "foo:bar" Created schedule ``` You can view the created pipeline schedule with `glab schedule list`: ``` > glab schedule list Showing 1 schedules on example/project (Page 1) ID Description Cron Owner Active 1038 Run main pipeline everyday * * * * * segu true ``` To run the pipeline schedule outside the defined rhythm, start it with `glab schedule run`: ``` > glab schedule run 1038 Started schedule with ID 1038 ``` And if it is no longer needed, you can simply delete it: ``` > glab schedule delete 1038 Deleted schedule with ID 1038 ``` ## glab API Not all functions that Gitlab offers are yet usable with `glab`. For such cases, it is possible to communicate directly with the Gitlab API using `glab api`. The command to display the pipeline schedules (`glab schedule list`) mentioned in the previous section can be replicated using a `glab api` call: ``` > glab api projects/:fullpath/pipeline_schedules/ [ { "id": 1038, "description": "Run main pipeline everyday", "ref": "main", "cron": "* * * * *", "cron_timezone": "UTC", "next_run_at": "2023-06-22T08:33:00.000Z", "active": true, "created_at": "2023-06-22T08:24:02.199Z", "updated_at": "2023-06-22T08:24:02.199Z", "owner": { "id": 97, "username": "segu", "name": "Sebastian Gumprich", "state": "active", } } ] ``` You can also delete the pipeline schedule this way: ``` > glab api projects/:fullpath/pipeline_schedules/1038 -X DELETE > glab api projects/:fullpath/pipeline_schedules/1038 glab: 404 Pipeline Schedule Not Found (HTTP 404) { "message": "404 Pipeline Schedule Not Found" } ``` ## Aliases To avoid having to remember the sometimes more complicated API calls, `glab` has functionality to create aliases. Two aliases are already set up by default, which you can display as follows: ``` > glab alias list ci pipeline ci co mr checkout ``` So if you want to check out a merge request, you can simply call `glab co` instead of `glab mr checkout`. 
You can define your own aliases as follows: ``` > glab alias set schedule_list 'api projects/:fullpath/pipeline_schedules/' - Adding alias for schedule_list: api projects/:fullpath/pipeline_schedules/ ✓ Added alias. ``` And of course, you can delete them again: ``` > glab alias delete schedule_list ✓ Deleted alias schedule_list; was api projects/:fullpath/pipeline_schedules/ ``` ## Set Variables from GitlabCI Locally Another useful `glab` feature is working with CICD variables. You can display, create, and delete these as well: ``` > glab variable list > glab variable set foo bar ✓ Created variable foo for 7001-07/nrwsp with scope * > glab variable get foo bar > glab variable delete foo ✓ Deleted variable foo with scope * for 7001-07/nrwsp ``` The variables created this way can be used locally in a simple manner. If you use Terraform, for example, you can set your TF\_VAR variables easily by setting the output of `glab variable get` as an environment variable. ``` export TF_VAR_db_root_password=$(glab variable get TF_VAR_db_root_password) export TF_VAR_secret_key=$(glab variable get TF_VAR_secret_key) export TF_VAR_access_key=$(glab variable get TF_VAR_access_key) ``` If you copy these `export`s into your README, each team member can set the correct Terraform variables with a simple copy-paste, without having to copy them from a password manager in a cumbersome way. ## Bash-Completion and Further Information If you want to know what else `glab` can do - the bash autocompletion shows it to you: ![glabs autocomplete shows explanations along with completions!](https://www.zufallsheld.de/images/autocomplete.gif) And many more details can of course be found on the [Homepage](https://docs.gitlab.com/ee/integration/glab/) of glab.
rndmh3ro
1,876,740
Operating Gitlab from the command line
Glab is an open-source tool that allows you to work with GitLab from the command line....
0
2024-06-06T10:02:10
https://www.zufallsheld.de/2024/06/03/gitlab-von-der-cli-bedienen/
gitlab, cli, glab
--- title: Operating Gitlab from the command line published: true date: 2024-06-03 13:00:00 UTC tags: gitlab,gitlab,cli,glab canonical_url: https://www.zufallsheld.de/2024/06/03/gitlab-von-der-cli-bedienen/ --- [Glab](https://gitlab.com/gitlab-org/cli#glab) is an open-source tool that allows you to work with GitLab from the command line. This eliminates the need to switch to a browser to create or approve merge requests, start a pipeline run, or view issues. Glab can work with repositories hosted on gitlab.com as well as with your own GitLab instances. The tool automatically detects which instance it should work with. The CLI tool was started by Clement Sam and has been an official GitLab product since 2022. ## Setup Glab can be installed in various ways. Since it is written in Golang, the executable can easily be downloaded and run from the [releases page](https://gitlab.com/gitlab-org/cli/-/releases). Alternatively, Glab is also available in various package repositories. It runs on Linux, Windows, and macOS. All installation options can be found [here](https://gitlab.com/gitlab-org/cli/-/blob/main/docs/installation_options.md)! ### Registering with the GitLab Instance Before working with repositories, you need to authenticate with the GitLab instance. For this, you need a Personal Access Token, which you can create in your GitLab profile. For the gitlab.com instance, you can create it [here](https://gitlab.com/-/profile/personal_access_tokens). Assign a name to the token and select the "api" and "write\_repository" permissions. The generated token will be needed in the next step. Now, log in to the GitLab instance using the token by running `glab auth login` and answering the prompts. ``` > glab auth login ? What GitLab instance do you want to log into? gitlab.com - Logging into gitlab.com ? How would you like to login? Token Tip: you can generate a Personal Access Token here https://gitlab.com/-/profile/personal_access_tokens The minimum required scopes are 'api' and 'write_repository'. ? Paste your authentication token: **************************? Choose default git protocol HTTPS ? Authenticate Git with your GitLab credentials? Yes - glab config set -h gitlab.com git_protocol https ✓ Configured git protocol - glab config set -h gitlab.com api_protocol https ✓ Configured API protocol ✓ Logged in as rndmh3ro ``` You can verify a successful login with `glab auth status`. ``` > glab auth status gitlab.com ✓ Logged in to gitlab.com as rndmh3ro (/home/segu/.config/glab-cli/config.yml) ✓ Git operations for gitlab.com configured to use https protocol. ✓ API calls for gitlab.com are made over https protocol ✓ REST API Endpoint: https://gitlab.com/api/v4/ ✓ GraphQL Endpoint: https://gitlab.com/api/graphql/ ✓ Token: ************************** git.example.com ✓ Logged in to git.example.com as segu (/home/segu/.config/glab-cli/config.yml) ✓ Git operations for git.example.com configured to use https protocol. ✓ API calls for git.example.com are made over https protocol ✓ REST API Endpoint: https://git.example.com/api/v4/ ✓ GraphQL Endpoint: https://git.example.com/api/graphql/ ✓ Token: ************************** ``` ## Working with Repositories Once you have successfully logged into the GitLab instance, you can work with repositories using glab.
## Cloning a Repository First of all, you will of course want to clone repositories with `glab`. To do this, run `glab repo clone path/to/repo`, followed by an optional target directory. ``` > glab repo clone gitlab-org/cli Cloning into 'cli'... remote: Enumerating objects: 18691, done. remote: Counting objects: 100% (72/72), done. remote: Compressing objects: 100% (34/34), done. remote: Total 18691 (delta 53), reused 39 (delta 37), pack-reused 18619 Receiving objects: 100% (18691/18691), 22.98 MiB | 5.97 MiB/s, done. Resolving deltas: 100% (12391/12391), done. ``` If you have multiple repositories in a group that you want to clone, you can do this with `glab` as well. Use the `--group` option (or `-g`) to clone all repositories in the group one after the other: ``` > GITLAB_HOST=gitlab.com glab repo clone -g gitlab-org fatal: destination path 'verify-mr-123640-security-policy-project' already exists and is not an empty directory. Cloning into 'verify-mr-123640'... remote: Enumerating objects: 3, done. remote: Counting objects: 100% (3/3), done. remote: Compressing objects: 100% (2/2), done. remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0 Receiving objects: 100% (3/3), done. Cloning into 'without-srp'... remote: Enumerating objects: 30, done. remote: Counting objects: 100% (14/14), done. remote: Compressing objects: 100% (10/10), done. remote: Total 30 (delta 7), reused 4 (delta 4), pack-reused 16 Receiving objects: 100% (30/30), 5.94 KiB | 5.94 MiB/s, done. Resolving deltas: 100% (9/9), done. Cloning into 'container-scanning-with-sbom'... ``` ## Working with Merge Requests After checking out the repository, you can start working on issues or merge requests (MRs). A typical code change process usually looks like this: You make your code changes, commit them, and then push them to GitLab. Then you want to create a merge request. Normally, you would now switch to the GitLab website and create the MR there. Thanks to `glab`, you don't have to leave the command line. Using `glab mr create`, an MR is created interactively. You are guided through the creation process, specifying the title and description, and are then asked whether you want to create the MR directly or view it in the web frontend before it is created. ``` > glab mr create ? Choose a template Open a blank merge request ? Title: New Feature ? Description <Received> ? What's next? Submit Creating merge request for test into master in gitlab-org/cli !351 New Feature (test) https://gitlab.com/gitlab-org/cli/-/merge_requests/1297 ``` You can then view it. If you want to do this in the browser, run `glab mr view` with the `--web` (or `-w`) parameter. ``` > glab mr view -w 1297 ``` You can also view the same content directly on the command line (including comments): ``` > glab mr view 1297 -R gitlab-org/cli open • opened by rndmh3ro about 1 hour ago docs: add installation options with wakemeops !1297 ## Description Add installation options with wakemeops-repository. Note: I'm not affiliated with WakeMeOps, just a happy user. ## Related Issues Resolves #1363 ## How has this been tested? not at all.
## Screenshots (if appropriate): ### Types of changes [] Bug fix (non-breaking change which fixes an issue) [] New feature (non-breaking change which adds functionality) [] Breaking change (fix or feature that would cause existing functionality to change) [✓] Documentation [] Chore (Related to CI or Packaging to platforms) [] Test gap 0 upvotes • 0 downvotes • 5 comments Labels: Community contribution, documentation, linked-issue, tw::triaged, workflow::ready for review Assignees: rndmh3ro Reviewers: aqualls Pipeline Status: success (View pipeline with `glab ci view add_wakemeops_docs`) Approvals Status: Rule "All Members" insufficient approvals (0/1 required): Rule "/docs/" sufficient approvals (0/0 required): Amy Qualls aqualls - ✓ This merge request has 1 changes View this merge request on GitLab: https://gitlab.com/gitlab-org/cli/-/merge_requests/1297 ``` If you are not currently working on the codebase yourself but are reviewing MRs from other people, you can list open merge requests with `glab mr list`. ``` > glab mr list Showing 22 open merge requests on gitlab-org/cli (Page 1) !1297 gitlab-org/cli!1297 docs: add installation options with wakemeops (main) ← (add_wakemeops_docs) !1296 gitlab-org/cli!1296 fix(repo view): consider current host when viewing a repo details (main) ← (jmc-1334) !1295 gitlab-org/cli!1295 chore(ci): remove ssh key from build (main) ← (jmc-remove-ssh) ``` Afterwards, you check out the MR you want to look at: ``` > glab mr checkout 1297 ``` If you are satisfied with the content, you can, for example, add a note to the merge request and then approve it right away: ``` > glab mr note -R gitlab-org/cli -m "LGTM" > glab mr approve - Approving Merge Request !1297 ✓ Approved ``` And of course, you can also merge the MR right away: ``` > glab mr merge ? What merge method would you like to use? Rebase and merge ? What's next? Submit ✓ Rebase successful ! No pipeline running on test ✓ Rebased and merged !1297 New Feature (test) https://gitlab.com/gitlab-org/cli/-/merge_requests/1297 ``` To view your merged MRs at the end of the day, you can pass options to `glab mr list` to show only merged (`-M`) or only your own (`-a @me`) MRs: ``` > glab mr list -M -a @me Showing 4 merged merge requests on gitlab-org/cli (Page 1) !1279 gitlab-org/cli!1279 feat(schedule): Add commands to create and delete schedules (main) ← (create_del_sched) !1176 gitlab-org/cli!1176 feat(schedule): Add command to run schedules (main) ← (run_schedule) !1143 gitlab-org/cli!1143 docs: remove duplicate defaults in help (main) ← (fix_help_doc) !1112 gitlab-org/cli!1112 feat(schedule): Add command to list schedules (main) ← (sched_list) ``` ## Working with Pipelines Code changes are tested through an automatic CICD pipeline. Naturally, `glab` offers the ability to work with pipelines. To start a pipeline on the main branch, run the following command: ``` > glab ci run -b main Created pipeline (id: 540823 ), status: created , ref: main , weburl: https://git.example.com/example/project/-/pipelines/540823 ) ``` You can view the status of the pipeline like this: ``` > glab ci status (failed) • 01m 11s lint test https://git.example.com/example/project/-/pipelines/540812 SHA: 275cb8295c69db166e1b1c94936d4c4b67463701 Pipeline State: failed ? Choose an action: Exit ``` If a pipeline has failed, you can view the logs using `glab ci trace`: ``` > glab ci trace Searching for latest pipeline on test...
Getting jobs for pipeline 540812... ? Select pipeline job to trace: kics-scan (1209237) - failed Getting job trace... Showing logs for kics-scan job #1209237 Running with gitlab-runner 14.10.1 (f761588f) on example-shared-docker swAou6b9 Resolving secrets Preparing the "docker" executor ``` `glab` works excellently with Unix pipes, so you can easily grep for errors: ``` > glab ci trace 1209237 | grep -i failed Queries failed to execute: 10 ERROR: Job failed: exit code 50 ``` ## Linting Speaking of errors - you can incorporate them wonderfully into the CICD configuration file, the `.gitlab-ci.yml`. If you change this file, for example to add a new stage, and make a mistake, you usually only notice it after you have pushed your changes and wonder why the pipeline does not start. Fortunately, you can check ("lint") the configuration with `glab`. If an error has crept in, `glab ci lint` will detect it. ``` > glab ci lint Validating... .gitlab-ci.yml is invalid 1 (<unknown>): did not find expected key while parsing a block mapping at line 2 column 1 ``` After the correction, the linting then reports success: ``` > glab ci lint Validating... ✓ CI/CD YAML is valid! ``` ## Working with Schedules I am particularly proud of the feature to create and run pipeline schedules with `glab`, because I implemented it. Pipeline schedules are there to run pipelines automatically at regular intervals. You can create them with `glab`. To do this, you pass the command a cron expression (which defines when the pipeline should run), a description, and the branch on which the pipeline should run: ``` > glab schedule create --cron "0 2 * * *" --description "Run main pipeline everyday" --ref "main" --variable "foo:bar" Created schedule ``` You can view the created pipeline schedule with `glab schedule list`: ``` > glab schedule list Showing 1 schedules on example/project (Page 1) ID Description Cron Owner Active 1038 Run main pipeline everyday * * * * * segu true ``` To run the pipeline schedule outside the defined rhythm, start it with `glab schedule run`: ``` > glab schedule run 1038 Started schedule with ID 1038 ``` And if it is no longer needed, you can simply delete it: ``` > glab schedule delete 1038 Deleted schedule with ID 1038 ``` ## glab API Not all functions that Gitlab offers can yet be used with `glab`. For such cases, it is possible to communicate directly with the Gitlab API using `glab api`.
The command for displaying the pipeline schedules (`glab schedule list`) mentioned in the previous section can be replicated using a `glab api` call: ``` > glab api projects/:fullpath/pipeline_schedules/ [ { "id": 1038, "description": "Run main pipeline everyday", "ref": "main", "cron": "* * * * *", "cron_timezone": "UTC", "next_run_at": "2023-06-22T08:33:00.000Z", "active": true, "created_at": "2023-06-22T08:24:02.199Z", "updated_at": "2023-06-22T08:24:02.199Z", "owner": { "id": 97, "username": "segu", "name": "Sebastian Gumprich", "state": "active", } } ] ``` Deleting the pipeline schedule is also possible this way: ``` > glab api projects/:fullpath/pipeline_schedules/1038 -X DELETE > glab api projects/:fullpath/pipeline_schedules/1038 glab: 404 Pipeline Schedule Not Found (HTTP 404) { "message": "404 Pipeline Schedule Not Found" } ``` ## Aliases So that you don't have to remember the sometimes more complicated API calls, `glab` has functionality to create aliases. Two aliases are already set up by default, which you can display as follows: ``` > glab alias list ci pipeline ci co mr checkout ``` So if you want to check out a merge request, you can simply call `glab co` instead of `glab mr checkout`. You can define your own aliases as follows: ``` > glab alias set schedule_list 'api projects/:fullpath/pipeline_schedules/' - Adding alias for schedule_list: api projects/:fullpath/pipeline_schedules/ ✓ Added alias. ``` And of course, you can delete them again: ``` > glab alias delete schedule_list ✓ Deleted alias schedule_list; was api projects/:fullpath/pipeline_schedules/ ``` ## Set Variables from GitlabCI Locally Another useful `glab` feature is working with CICD variables. You can display, create, and delete these as well: ``` > glab variable list > glab variable set foo bar ✓ Created variable foo for example/project with scope * > glab variable get foo bar > glab variable delete foo ✓ Deleted variable foo with scope * for example/project ``` The variables created this way can also be used locally in a simple manner. If you use Terraform, for example, you can set your TF\_VAR variables easily by setting the output of `glab variable get` as an environment variable. ``` export TF_VAR_db_root_password=$(glab variable get TF_VAR_db_root_password) export TF_VAR_secret_key=$(glab variable get TF_VAR_secret_key) export TF_VAR_access_key=$(glab variable get TF_VAR_access_key) ``` If you copy these `export`s into your README, each team member can set the correct Terraform variables with a simple copy-paste, without having to copy them from a password manager in a cumbersome way. ## Bash-Completion and Further Information If you want to know what else `glab` can do - the bash autocompletion shows it to you: ![glab's autocomplete shows explanations along with the completions!](https://www.zufallsheld.de/images/autocomplete.gif) And many more details can of course be found on the [Homepage](https://docs.gitlab.com/ee/integration/glab/) of glab.
rndmh3ro
1,875,369
Does Serverless Still Matter?
No. Short, simple, and direct. The answer to the question is that serverless at this point and time...
0
2024-06-03T12:59:53
https://www.binaryheap.com/does-serverless-still-matter/
serverless
No. Short, simple, and direct. The answer to the question is that serverless at this point and time doesn't matter. Now I'm not saying that it's never mattered. But what I am saying is that it's just a tool in a developer's toolchain. It's not some sweeping "movement" that it was and I firmly believe that this is all OK. I don't see this as doom and gloom. It's more about WOW, that happened, now what's next. With that, let's look at how we got here and where I think we go from here. ## The Serverless Arc There are always people coming in and out of a community or technology ecosystem. Serverless is no different. And while some might say serverless is new and unproven, they'd be mistaken. Google began shipping pay-as-you-go compute in the late 2000s but it wasn't until AWS released Lambda in 2014 that the serverless banner was hung in the cloud. That gives the services and patterns more than 10 years of real-world production deployments. There's been time to learn, fail, and harden so that its use cases can be clearly defined and exploited. Additionally, as computing continues to improve, the lines have gotten blurry when having to decide to choose always-on vs event-driven serverless computing. The performance of serverless compute is almost on par with those in the full time workload camp. Anyone who says differently hasn't been paying attention. Serverless has been deployed successfully in some of the most demanding of cloud-native businesses. Let's pretend for a minute that you need even more convincing, then [here's a great whitepaper](https://www.gomomento.com/resources/downloadable-resources/a-guide-to-unlocking-serverless-at-enterprise-scale) that drives these points home even further. Spoiler, you might recognize the author. Serverless has a story and with all stories, there is a beginning, middle, and end. ### The Early Days My perspective doesn't come from the excitement of launching new products and services in one of the big vendors but from that of a cloud builder and community member. Sure, I haven't always been as active as I am now, but I am an early adopter who approaches things with a healthy dose of skepticism. I don't want to make bets on things that end up having to be replaced because the tech was abandoned. I feel like serverless was born during a time when service buses were dying and the birth of microservices and containers was happening as well. I lived through the container wars and container orchestration discussions and remember how easy it was for serverless to slide under the radar. It wasn't until 2015 that I actually got my hands on Lambda and then in 2016 when I put something in production powered by this thing called serverless compute. If you've heard me say serverless is more than just compute. That's true now, but it wasn't always that way. From a community and builder standpoint, AWS didn't make quite the push that I remember. I believe that early practitioners and precursors to the Developer Advocate explosion were building patterns and materials to onslaught the market with what their engineering teams had produced. Again, this isn't backed by specific inside knowledge, only my perception and what I imagined would have happened. Serverless to me had gotten lost in the shuffle while the architecture and developer community leaned into container-based distributed APIs. The early days though just like any set of early days were filled with hope, promise, and a chance at changing the status quo. 
### Mid-Life At this point in the serverless arc, things really started to pick up. If I was going to put a time on things, I'd say mid-life started in late 2016 and we are currently living in these same times. From my vantage point, there was a massive energy that was released from AWS and others to saturate the market with quality materials, samples, and patterns so that any builder looking to jump on the train had an easy on-ramp. The serverless energy was almost like the early days of the iPhone. You almost couldn't help but buy into the hype. Because honestly, it was hype. There were limited runtimes, not nearly as many connection points that exist now, DynamoDB modeling was proven in-house but not so much in industry, and Lambda itself suffered massive spin-up times. Some of these issues limited a builder to using Lambda in only asynchronous type workflows. I know it's hard to believe, but there was no EventBridge or Step Functions either. Seriously, early times had a bunch of hype mixed in with a great deal of promise. It was that promise that fueled a movement. A movement that AWS and others invested heavily in by encouraging community and online discord to the point of it being everywhere. I've been doing this since the mid-nineties and I've never seen a push and rally behind something quite like this. Docker, Ruby on Rails, Java, .NET, and the current version of AI are the only things that I remember in my career that have come this close. If I take a step back and look at why serverless is so interesting to me, it's because it's not like Docker, RoR, or Java. Those were open source projects that had tremendous support from community members. We all know how passionate open source contributors can be. And yes, I remember Java started with Sun but it did get released as open source. It also was heavily supported by people who wanted to work in Free and Open Source technologies. At the time, Java == Linux and Linux wasn't Microsoft. So why am I digressing here? Because serverless has nothing to do with open source. I get it, AWS Firecracker which powers Lambda is open source, but serverless in and of itself is not a technology. It's a description or an umbrella that a capability lives under. When I look at these facts, I find the whole thing so interesting. The communities that stood up around serverless were quasi-corporate sponsored communities and that hadn't happened before in my memory. Not at the scale and the force that serverless did it. Zeroing back in on the present, I do believe that we are at the tail end of the mid-life arc for serverless. For clarity, I don't think serverless is done by any stretch. Capabilities still need to be added, integrations built, continued work on observability, and generally more undifferentiated heavy lifting to take care of. But it feels to me like we've entered into a new space. The ones that launched this "run code without worrying about infrastructure" movement have gotten distracted by the next disruptor. In all fairness, this happens in any industry and with any technology. Innovation is like breathing. But what I don't like about what I see with the serverless ecosystem is this. If the iPhone is analogous to an appliance at this point and it can only take quality-of-life updates, then I believe that serverless is getting close to that point. I don't think it has to reach that point but with a lack of innovation from the big providers, the movement will begin to lose steam. Enter the end-of-life phase. 
### End of Life Everything gets here. Software, animals, people. We are born, we live, we die. As I mentioned above, serverless will eventually get to the point that it's like the iPhone. It will receive quality-of-life updates and those that market and sell will continue to make each release cycle sound like the next best thing is here. ElastiCache and OpenSearch Serverless sound familiar? But truthfully, builders can smell and feel what's not all the way real. This isn't a bad thing honestly at its core. All of the serverless code and applications in production can't just "go away". AWS, Google, and Microsoft will continue to run these workloads for us and the software systems we've built will continue to live on. Code spends more time in maintenance than in any other phase of its life. However, what will happen is that the energy, content, and communities will also slowly spin down and we will leave the era of "run code without thinking about servers" and move into the world of what's next. If current trends follow, it'll be the world of AI and the creation of code without servers as well. So we went from running and not caring about the infrastructure to now generating code that we don't care what infrastructure created it. If we enter the end-of-life phase and you don't realize the impact that serverless has had, you truly haven't been paying attention. The point of acknowledging this though is important. Serverless won't end because it wasn't impactful, meaningful, or real. All things go out of style especially something that was corporate-backed. They will move on to what's next because that's how you innovate, make money, and generate more value. Serverless was important. ## What's the Point? Now that we've taken that detour through what I believe is the arc of serverless, how can I possibly say that serverless doesn't matter? If you look me up online, you'll see that I'm an AWS Community Builder focused on serverless, I'm an active writer and code producer who is very often serverless, and I'm a Champion in the [Believe in Serverless community](https://www.believeinserverless.com/). I can believe in the arc I shared above and believe in serverless itself. Those things aren't at odds. And here's why. ### The Phoenix If I was casting a vision for the future, here's what I think. Serverless the big corporate-sponsored version is on the slide towards the end of life era. But just like the Phoenix from mythology, serverless has a chance to be reborn and rise from its ashes. You can see that happening actually now. Kind of weird that death and life are happening at the same time, but they are. The most amazing thing that AWS, Google, and Microsoft have given to the world is the gift of obscene amounts of compute and wonderfully built infrastructure. That infrastructure provides us as builders compute power beyond our wildest dreams. But not beyond the visions of new leaders in the serverless product space. New products are being created seemingly overnight. Products like [Momento](https://www.gomomento.com/) are not building serverless cache, they are reimagining caching and application performance while solving the problems with a serverless mindset. [Serverless Postgres](https://neon.tech/) is now a thing. And companies that have traditionally been installed are now embracing serverless. Just look at [InfluxDB](https://www.influxdata.com/) which is now offering a serverless version. 
Serverless is going to be reborn because the promises it makes are sound and good for developers and businesses: both the businesses buying products built with serverless and the businesses building serverless offerings. If I had to look 5 years ahead, I see a world where more companies like this are spinning up and filling the gaps that AWS, Google, and Microsoft are leaving by being so heavily invested in AI. And by the time those giants spin back around, maybe they can buy their way back in, or maybe they won't want to, but we as builders will have moved on as well. Not without serverless, but without big corporate serverless. ### Value over Dogma I mentioned it above, but other successful "movements" in tech were fostered and cared for by passionate open source contributors. Serverless doesn't share those roots; as I've mentioned, it was born out of companies. But what has happened is that the serverless movement, along with a boost from these new upstart vendors, has the human capability to carry forward in this new world. Communities like [Believe in Serverless](https://discord.gg/ys7wtdwCC5) are fostering collaboration and engagement regardless of your flavor of serverless or programming language of choice. What I find so interesting about what I see right now is that the online discourse has moved past talking about the far left or far right of serverless and is just talking about delivering value and solving problems. The word serverless rarely comes up. The focus has found its way to value, users, and developer experience. Which is right where it always is: down the center of the tech world. What I also find unique to this version of the serverless community is that it isn't tied to a single vendor, or even to serverless itself. The concept of serverless-only pushed too far into the dialogue, and I believe that happened because of where it was coming from. The truth is, it can be serverless only, but almost every deployment is going to be serverless plus. And the most responsible thing a serverless architect can do is be serverless first but not serverless always. It just doesn't make sense. And this community gets that. It's different from what it was like in years past. Version x.0 of Serverless is a much more moderate and tempered crowd. Which ultimately is a great thing. So again, serverless doesn't matter. Value has always mattered. And what's being shown is that serverless plays a role in shipping value. But honestly, it always has. ### People I always end up back here, don't I? I believe strongly that there is more humanity in tech than people want to acknowledge. Sure, algorithms, data structures, transistors, power, and everything in between are very scientific. But just like a building has its physical aspects, if it didn't deliver a solid user experience, the building wouldn't sell. Software is like this but at a higher level. You can't build good software without the help of others. And you can't build a good community or support a movement like serverless without amazing people. I've said this from day 1 of being public in the serverless and tech community: I wouldn't be doing or sharing most of this content if it weren't for the people. Serverless doesn't matter to me because I could be writing COBOL code with the people I've met as a part of this movement and it would be A-OK by me. 25 years ago I was involved in Linux communities because the people were awesome to hang out with. And serverless to me has that same feel. 
And it's something I give AWS credit for even beyond the software and the marketing that launched serverless into the world like the Hulk Ride at Universal Orlando. They launched serverless with amazing people. And they recruited heavily to build communities and groups that also had quality humans at the core. Those are facts that just underscore for me that serverless doesn't matter. People do. They always have and they always will long after the world realizes that GenAI is just the next fad. ## Wrapping Up And even though these times are fading, the people aren't. We are just finding other ways to organize and collaborate. And if the computers and the AI take away this craft called programming that I love dearly, I'll still have the friends and relationships that I've made through being in this community. And then perhaps we'll have more time on our hands to do things IRL vs always being virtual. Who knows. But I do know this. Serverless mattered. A computing movement was built that helped shape this next phase and the world is better for having had this happen. But I also know that it mattered for reasons behind the compute. It mattered because of the community that was born from it. Artificial, manufactured, or cultivated, who cares? It happened. And what happens next is also why serverless no longer matters. Because the people that came together matter more and the future is brighter than it's ever been. Thanks so much for reading this different piece. And happy building!
benbpyle
1,875,368
Class Constructors
A class constructor is a special method in object-oriented programming that is automatically called...
0
2024-06-03T12:58:00
https://dev.to/shantel57427931/class-constructors-j7h
webdev, javascript, beginners, programming
A class constructor is a special method in object-oriented programming that is automatically called when an instance (object) of the class is created. The primary purpose of a constructor is to initialize the newly created object. In many programming languages, including Python, C++, and Java, constructors have specific syntax and behavior. Let's look at examples in a few popular languages:

**Python**: In Python, the constructor method is defined using the `__init__` method. This method is called when an object is instantiated.

**Java**: In Java, the constructor has the same name as the class and does not have a return type, not even void.

**JavaScript**: In JavaScript, a class constructor is a special method used for creating and initializing an object created within a class. The constructor method is called automatically when a new instance of the class is created using the `new` keyword.
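The language examples the text refers to are not present in this copy, so here is a minimal JavaScript sketch of the idea; the `Person` class and its fields are made up purely for illustration:

```javascript
// A class whose constructor initializes the newly created object.
class Person {
  constructor(name, age) {
    // `this` refers to the instance being created by `new`.
    this.name = name;
    this.age = age;
  }

  greet() {
    return `Hi, I'm ${this.name} and I'm ${this.age} years old.`;
  }
}

// The constructor runs automatically when `new` is used.
const alice = new Person("Alice", 30);
console.log(alice.greet()); // Hi, I'm Alice and I'm 30 years old.
```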
shantel57427931
1,875,366
Enhancing Guest Comfort: The Role of Hotel Amenities
Enhancing Guest Comfort: The Role of Hotel Amenities Resorts have come a very long way coming from...
0
2024-06-03T12:56:39
https://dev.to/jonathan_hulseh_7a93b8f43/enhancing-guest-comfort-the-role-of-hotel-amenities-2d5c
Enhancing Guest Comfort: The Role of Hotel Amenities Hotels have come a very long way from simply offering the basics, such as a roof over one's head and a bed to sleep on. As competition in the hospitality industry becomes fiercer, hoteliers are constantly developing ways to differentiate their brand and offer added value to their guests. One element that has proven effective in enhancing guest comfort is amenities. Benefits of Hotel Amenities Hotel amenities are the extras that guests receive on top of basic accommodation, such as toiletries, bed linen, towels, and in-room entertainment. There are several benefits to providing hotel amenities, some of which include: 1. Enhanced guest experience - Offering guests thoughtful extras that cater to their wants and needs translates into happy, satisfied customers. This can lead to positive reviews and repeat bookings. 2. Competitive edge - Providing unique, innovative, and high-quality amenities can set a hotel apart from other properties. As travelers become more discerning, amenities can be a deciding factor in their choice of lodging. 3. Increased revenue - While some amenities are free, others come at an extra cost, which can generate revenue for the hotel. Innovation in Hotel Amenities As guests become more tech-savvy, luxury hotel amenities should keep up with the times. Innovative amenities that incorporate technology can improve the guest experience and offer added convenience. Some examples include: 1. Smart-room technology - This includes features that can be controlled through a mobile app or voice assistant, such as temperature, lighting, and entertainment. 2. Virtual and augmented reality - These technologies can be used in different ways, such as offering guests virtual tours or enhancing in-room entertainment. 3. Robotic butlers - Some hotels have introduced robotic butlers that can deliver items such as towels, snacks, and drinks. Safety of Hotel Amenities While hotel amenities are designed to improve the guest experience, safety is critical. Hotels must ensure that their amenities are safe for guests to use and that all safety procedures are communicated. Some ways of ensuring safety include the following: 1. Routine maintenance - This includes inspecting all of the amenities to ensure they remain in good condition. 2. Testing and certification - Hotels should ensure that any new amenities added to the property meet safety standards and have the necessary certification. 3. Clear instructions - All amenities should include clear instructions and warnings to prevent accidents. Using Hotel Amenities To get the most value out of hotel amenities, guests should understand how to use them effectively. Hotels can provide instructions and tips on how to use different amenities, such as: 1. In-room entertainment - Guests can be given instructions on how to access different channels, use the remote control, and connect their devices to the TV. 2. Bathroom amenities - Toiletries such as shampoo and conditioner can include directions on how much to use and how to apply them. 3. Room service - Hotels can offer an extensive menu and give guests tips on how to order. Quality of Hotel Amenities The quality of hotel amenities can make or break a guest's experience. Hotels should aim to offer high-quality amenities that meet guests' expectations. This can include: 1. Using reputable suppliers - Hotels should source their amenities from reputable suppliers known for their quality. 2. Consistency - All of the amenities should be of consistent quality, so guests know exactly what to expect. 3. Regular assessment - Hotels should regularly assess the quality of their amenities and make changes as needed. Application of Hotel Amenities The application of hotel amenities can differ depending on the type of hotel and its target audience. For example, business hotels might focus on luxury hotel amenity sets that cater to the needs of business travelers, such as a business center, airport transfers, and meeting rooms. On the other hand, a family-friendly hotel might focus on amenities that cater to families, such as a swimming pool, a playground, and a babysitting service. Source: https://www.kailai-amenity.com/application/hotel-amenities
jonathan_hulseh_7a93b8f43
1,875,364
comment on I need a hacker to recover money from binary trading
It began with a serendipitous encounter—a testimonial shared by Britney, a fellow traveler on the...
0
2024-06-03T12:52:10
https://dev.to/harold_myoung_8d1d29c119/comment-on-i-need-a-hacker-to-recover-money-from-binary-trading-228h
It began with a serendipitous encounter—a testimonial shared by Britney, a fellow traveler on the winding road of digital assets. Her words spoke of a miraculous recovery the Linux Cyber Security Company orchestrated. Intrigued and desperate for a lifeline in my crypto conundrum, I embarked on a journey that would redefine my perception of trust in the digital realm. Upon reaching out to Linux Cyber Security Company, my heart raced with anticipation, hoping they could work their magic on my blocked crypto recovery. With bated breath, I shared my plight, providing evidence of my dilemma and praying for a solution. To my astonishment, a prompt response came—a beacon of reassurance in the sea of uncertainty. Linux Cyber Security Company wasted no time in assessing my situation, guiding me through the process with patience and expertise. Like a celestial conductor orchestrating a symphony of redemption, Linux Cyber Security Company navigated the complexities of my predicament with finesse and precision. And then, like a ray of sunshine breaking through the clouds, came the news I had longed for—a new private key had been generated, restoring my precious bitcoins. It was a moment of jubilation, a triumph over adversity that filled me with newfound hope. But Linux Cyber Security Company capabilities transcended the mere retrieval of lost assets. Delving deeper into their expertise, I discovered a wealth of knowledge in reclaiming lost or stolen cryptocurrency, and even exposing fraudulent investment schemes. Their dedication to unraveling the intricacies of online fraud is not merely a profession, but a calling—a commitment to safeguarding the integrity of the digital landscape. Through their unwavering support and commitment to integrity, Linux Cyber Security Company has become more than just skilled technicians—they are guardians of trust in the digital realm. With their assistance, I reclaimed what was rightfully mine, turning despair into determination and paving the way for a brighter future. Take heart to all those who find themselves entangled in the web of online fraud. With the guidance of Linux Cyber Security Company, there is a path to redemption—light amidst the shadows of uncertainty. Trust their expertise, and let them illuminate the financial restoration and digital resilience path. Linux Cyber Security Company is a testament to the power of perseverance and knowledge in an ever-evolving digital landscape. With their unwavering commitment to integrity and unparalleled skill in navigating the complexities of cryptocurrency recovery, they have earned my deepest gratitude and highest recommendation. Website: [ www.linuxcybersecurity.com ] Email: [support@linuxcybersecurity.com ]
harold_myoung_8d1d29c119
1,875,362
Difference between Synchronous and Asynchronous Java Script
Synchronous and asynchronous JavaScript refer to different approaches for handling operations,...
0
2024-06-03T12:46:57
https://dev.to/shantel57427931/difference-between-synchronous-and-asynchronous-java-script-39o4
webdev
Synchronous and asynchronous JavaScript refer to different approaches for handling operations, particularly those that may involve waiting, such as I/O operations, network requests, or timers.

**Synchronous JavaScript**

Synchronous operations are those that block the execution of subsequent code until they are completed. In synchronous JavaScript, tasks are performed one after another, in sequence. This means that each operation must wait for the previous one to complete before executing.

Characteristics:
- Blocking: The execution of code stops at each step until the current operation finishes.
- Simple to understand: The flow of code is straightforward and predictable.
- Potentially slow: Long-running operations (e.g., file reading, network requests) can block the entire execution flow, leading to slow performance and an unresponsive user interface.

**Asynchronous JavaScript**

Asynchronous operations allow the execution of subsequent code without waiting for the current operation to complete. Asynchronous JavaScript enables concurrent execution, meaning that tasks can be initiated and then completed at a later time, without blocking the flow of execution.

Characteristics:
- Non-blocking: Subsequent code can execute immediately, without waiting for the current task to finish.
- Complex: Managing the flow of asynchronous operations can be more complicated, involving callbacks, promises, or async/await.
- Efficient: Asynchronous operations can improve performance, especially for tasks like network requests or file I/O, by allowing other code to run while waiting for the operation to complete.
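A short sketch may help make the difference concrete; the URL used here is only a placeholder and is not part of the original article:

```javascript
// Synchronous: each statement blocks until the previous one has finished.
const numbers = [1, 2, 3];
const doubled = numbers.map((n) => n * 2); // completes before the next line runs
console.log(doubled); // [2, 4, 6]

// Asynchronous: the request is started, and code after it keeps running.
async function loadData() {
  const response = await fetch("https://example.com/data.json"); // placeholder URL
  const data = await response.json();
  console.log("Data arrived:", data);
}

loadData();
console.log("This line runs before the data arrives.");
```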
shantel57427931
1,873,862
INTRODUCTION TO REACT JS
React is a JavaScript library used to create interactive single page applications. In traditional...
0
2024-06-03T12:46:02
https://dev.to/kemiowoyele1/introduction-to-react-js-10j
React is a JavaScript library used to create interactive single page applications. In traditional websites, every page a user clicks to is fetched from the server in the backend. React and other single page application tools introduced ways in which all of that could be handled in the browser. A single index html page is fetched from the server, and in the case of react, react will handle all other page loads from the front end. Creating an app with react is often done with the command line tool. React requires some configurations and setups that come with the tool. To use this tool, a modern version of node js must be installed on your computer. To confirm if you have node on your computer, open any terminal in your computer. If you are on windows, open the command prompt (cmd) and type `node -v` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u2d08wlkx90c7onsc3vx.png) If a version is displayed to you, it means you have node installed on your computer. Also, npx requires npm 5.2 or higher, so if the version bundled with your node installation is lower than that, you need to install a newer version. If you don't have node at all, go to nodejs.org and install the latest version. Still in the terminal, navigate into the directory where you want to create the project. Type in `npx create-react-app projectName` npx will download and run create-react-app from the npm registry and scaffold a new react project with the name we put in place of projectName above. Press enter and wait for a while for it to create the project for you. Once done, cd into the new project and type npm start to view the starter project. You can then open up the project in your text editor. Most of the files we are going to work with are going to be in the src folder, while the public folder holds the index.html file. The index.html file is the only html file that will be returned from the server. In this file, there is a div with an id of root that react will mount onto. All of our components will be rendered inside this div. You are not expected to do anything on this file, as what needs to be done is already taken care of by create-react-app. Another important file is the main.js or index.js file. This file is where the root element is mounted to the ReactDOM. It is also responsible for rendering the root component. You are also to leave this file as it is. The App.js file is the root component file. This is where you can create your own component. The App component that comes by default is already rendering some placeholder react content. You may erase all that and replace it with what you desire to render. To view your page on the browser, you spin up the local development server by typing in the terminal; ` npm run start` When it's done spinning up, it will give you an address that you can use to view your page on the browser. In most cases, the address is ➜ Local: http://localhost:3000/ ## jsx React code is written in JavaScript. Though some of the code will appear to be html, it is actually jsx. Jsx is a special JavaScript syntax extension for situations like this. The jsx code will be converted to regular JavaScript by the babel transpiler. ## React components React components are autonomous segments of content. They usually contain their own logic and html-like templates. For example, on a web page, the header, footer, sections and articles could be reusable separate components.
To create a react component; ``` import "./App.css"; function App() { return ( <> <div className="container"> <h1>This is my first react App</h1> </div> </> ); } export default App; ``` Declare a function with the name of the function starting with a capital letter, then use the return keyword to return the jsx code. Then export the component so it can be accessible to other components or the root page. Inside the return statement, the jsx codes wrapped in () will be rendered on the browser as html. The above code will be rendered in the browser like so; ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sw7stp884aja48qmcq34.png) The import code below was used to link the page to a css file which is responsible for the style you see on the above image. `import "./App.css";` ## Fragment In each component, only one root element can be rendered. Hence, having code like the one below, will throw an error. ``` return ( <div className="container"> <h1>This is my first react App</h1> </div> <div> another div </div> ); ``` This is usually avoided by wrapping the entire containing elements in a fragment tag. The fragment tag does not add extra HTML element to the DOM. It can be written as ``` return ( <Fragment> <div className="container"> <h1>This is my first react App</h1> </div> <div>another div</div> </Fragment> ); ``` But it is usually written as ``` return ( <> <div className="container"> <h1>This is my first react App</h1> </div> <div>another div</div> </> ); ``` ## Outputting dynamic values jsx allows us to output data in dynamic form without needing to hard code the data. To output dynamic data, you can create a variable inside the function, before the return keyword. You can also write any JavaScript code you wish to use in this space. To output the variable, we wrap the variable name in curly braces at the part of the jsx we want it outputted. ``` function App() { const myLocation = "I live in Abuja"; return ( <> <div className="container"> <h1> {myLocation} </h1> </div> </> ); } ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zkkzbkhdqwya8gyqagum.png) With this format, we can output as many values as we desire in our component. ``` import "./App.css"; function App() { const myLocation = "Abuja"; const myName = "Kemi Owoyele"; const myJob = "I am a software developer"; return ( <> <div className="container"> <h1>my name is {myName}</h1> <h1>I Live in {myLocation} </h1> <h1>{myJob}</h1> </div> </> ); } export default App; ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v2ysmmya9xryy4vp0vnc.png) You can output an array with this method; the only difference is that it will be converted to a string. ``` function App() { const myLocation = "Abuja"; const myName = "Kemi Owoyele"; const myJob = "I am a software developer"; const skills = ["CSS ", "HTML ", "JS ", "REACT"]; return ( <> <div className="container"> <h1>My name is {myName}</h1> <h1>I Live in {myLocation} </h1> <h1>{myJob}</h1> <h1>My skills are {skills}</h1> </div> </> ); } ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v1ixnkjc4rbbqy8mpunx.png) The exception however, is with objects and Booleans. 
``` function App() { const aboutMe = {name: 'Kemi Owoyele',job: 'software developer', location: 'Abuja'}; return ( <> <div className="container"> <h1> {aboutMe}</h1> </div> </> ); } ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/adtmrq99fco95ze1mu8k.png) We can also output dynamic values directly in the curly braces; ``` function App() { return ( <> <div className="container"> <h1> {5 + 10 / 2}</h1> </div> </> ); } ``` ![10](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/096wu6d26qivszgqztgg.png) We can also use dynamic values as HTML attributes. ## Multiple components A component is a function that returns a jsx template and is exported at the bottom of the file. A react app is made up of a component tree with the root component as the origin point of the other components. The root component is like the home page to which other components link. Not all components must link directly to the root component. ![component tree](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7507gzrls3j34xkbw3zm.png) The root component as given to us by the create react app tool is the App.js component. This is the component that is rendered in the main.js or index.js file. ``` import React from 'react' import ReactDOM from 'react-dom/client' import App from './App.js' import './index.css' ReactDOM.createRoot(document.getElementById('root')).render( <React.StrictMode> <App /> </React.StrictMode>, ) ``` To create our own component, say the Navbar component for example, we will need to create a new file in the source folder and name it Navbar.js. In large projects, you may prefer to create a components folder, and in that folder you can have your Navbar.js file. The name of the file is usually written with uppercase as the first character. In that file, create the component function as we showed above, and do not forget to export the function at the bottom. The first character of the name of the component function is also to be in uppercase. Navbar.js file ``` function Navbar() { return ( <> <nav> <div>React App</div> <div className="nav-links"> <a href="/">Home </a> <a href="/">Contact </a> <a href="/">About </a> </div> </nav> </> ); } export default Navbar; ``` Then import the component to the App component using the import statement and render the component where you want it to be. To render the component, you will write the name of the component like an HTML tag. Ensure that the tag is closed properly or you will get an error. You may use a self-closing tag though. App.js file ``` import Navbar from "./Navbar"; function App() { return ( <> <Navbar /> </> ); } export default App; ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y526tl361fy1dq1908x5.png) You can still create another component and nest it in the Navbar component. Just follow the same procedure. ## Adding CSS style To use a CSS file with react, import the css file into the js file you want it to apply to. It is common practice to have separate CSS files for each component. This is not absolutely necessary, because importing a css file into one component does not mean that its styles are scoped to that component. `import "./nav.css";` For a small project like the one we are building, we may as well stick to the index.css file. The index.css file came with the create react app tool, and has already been imported in the main.js or index.js file.
**Add inline styling** To add inline styling in react, a couple of rules apply; • Rather than write the styles in quotes like you would have done with inline html, you will use double curly braces. ` <nav style={{}}> `• The style values are written inside quotes like so; `color: "#99023c" ` • Style properties like background-color that are separated with a - sign are camel cased instead, as javascript will mistake the - for a minus operator. ` backgroundColor: "rgb(250, 169, 230)", ` • Styles are separated with commas instead of semicolons as done with inline HTML ``` style={{ color: "#99023c", backgroundColor: "rgb(250, 169, 230)", width: "100vw", }} ``` ## Click events React supports the different types of events available in javascript. To handle a click event, create the handleClick function (or whatever name you choose for the function) inside the component function, but above the return statement. Then you can reference the function in your jsx. Take note not to invoke the function with (), because it will be automatically invoked by react before the click. You are only to reference the function. ``` const Home = () => { const handleClick = () => { console.log("You clicked me"); }; return ( <> <div className="home">this is the home component</div> <button onClick={handleClick}>Click Me</button> </> ); }; export default Home; ``` If however, you intend to pass a parameter to the handleClick reference, you will have to wrap your reference in an anonymous function. That way you can pass your parameters without hassle. ``` const Home = () => { const handleClick = (num) => { console.log(`you get ${num} score per click`); }; return ( <> <div className="home"> <h3>this is the home component</h3> <button onClick={() => { handleClick(5); }} > Click Me </button> </div> </> ); }; export default Home; ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9gyq29938uivb5k3h2z5.png) ## Using state (useState hook) State of a component basically means the data being used by that component at a particular point in time. State management is an issue in react because react does not watch ordinary variables for changes in order to re-render. Rather, in react, data should be treated as immutable. To make those values reactive and renderable with react, we use the useState hook. To have access to the useState hook, you go to the top of your component page, before the function, and import useState from react. `import { useState } from "react";` useState is a function, and we pass our initial value as its argument. To be able to change that value, we destructure the array returned by the useState function into the state variable and the function for changing that variable. `const [number, setNumber] = useState(0);` the setter function (setNumber in this case) may then be used to update the data as desired ``` import { useState } from "react"; const Home = () => { const [number, setNumber] = useState(0); const handleClick = () => { setNumber(number + 1); }; return ( <> <div className="home"> <button onClick={handleClick}>Click Me</button> <h1>you have clicked {number} times</h1> </div> </> ); }; export default Home; ``` The value of the state could be any data type. ` const [number, setNumber] = useState( 0 );` ## Outputting Lists Sometimes, we may need to display a list of objects; we can achieve that by hard coding ul, li items and so on. But if the list is expected to be dynamic, and the state is eventually going to be changed, that means hard coding the list would be a bad choice.
To render the list, set the initial variable as the initial argument for the useState function. ``` const [products, setProducts] = useState([ { name: "Product 1", description: "lorem 123...", price: "$3.90", id: 1 }, { name: "Product 2", description: "lorem 123...", price: "$4.90", id: 2 }, { name: "Product 3", description: "lorem 123...", price: "$.90", id: 3 }, ]); ``` Then we can use the map() method to loop through the array and render the outcome. ``` import { useState } from "react"; const Home = () => { const [products, setProducts] = useState([ { name: "Product 1", description: "lorem 123...", price: "$3.90", id: 1 }, { name: "Product 2", description: "lorem 123...", price: "$4.90", id: 2 }, { name: "Product 3", description: "lorem 123...", price: "$.90", id: 3 }, ]); return ( <> <div className="home"> <ul> {products.map((product) => ( <li className="product-list" key={product.id}> <h5> {product.name}</h5> {<p> {product.price}</p>} </li> ))} </ul> </div> </> ); }; export default Home; ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9fqj6ijyk5rywxovi6n9.png) ## Key Attribute The key attribute is used to identify a particular item on the list. It is useful because, in case you need to update or delete a particular item, the key value will be the deciding factor for selecting the specific one to be affected. The key value must be unique; no two items should have the same key. ## React props Single page applications are popular for the advantage of reusing components. Props are used to make react components reusable. Props are used to pass data into the reusable components. To make our product list component reusable, we want to pull it out to a separate component, and render it in the home component. First, we will create a new file, ProductList.js, in the src folder. In the file, create your basic component structure, and don't forget the export default. ``` const ProductList = () => { return ( <> <div className="products"></div> </> ); }; export default ProductList; ``` Next, cut the code for mapping through the list items and place it inside the return statement of the ProductList component ``` const ProductList = () => { return ( <> <div className="products"> <ul> {products.map((product) => ( <li className="product-list" key={product.id}> <h5> {product.name}</h5> {<p> {product.price}</p>} </li> ))} </ul> </div> </> ); }; export default ProductList; ``` Then in the home component, import the ProductList component at the top, and nest the component where you want it to be rendered. ``` import { useState } from "react"; import ProductList from "./ProductList"; const Home = () => { const [products, setProducts] = useState([ { name: "Product 1", description: "lorem 123...", price: "$3.90", id: 1 }, { name: "Product 2", description: "lorem 123...", price: "$4.90", id: 2 }, { name: "Product 3", description: "lorem 123...", price: "$.90", id: 3 }, ]); return ( <> <div className="home"> <ProductList /> </div> </> ); }; export default Home; ``` At this stage, there will be an error because the map method in the ProductList component cannot access the products array in the Home component. This is where props come to the rescue. They are used to pass data from one component to another. Where the component is nested in the Home component, we will have to pass in the prop as an attribute. ` <ProductList products={products} />` Then we will accept the property in our ProductList component as an argument to the component function.
`const ProductList = ({ products }) => {` Note that the argument was destructured; if there were more than one prop, you could write them there, separating them with commas. `const ProductList = ({ products, title }) => {` ` <ProductList products={products} title="Available Products" />` The products value is written in curly braces because it is a dynamic value, while title is written in quotes because it is a plain string. If we had not destructured the argument, it would have been written as; `const ProductList = (props) => {` Then the properties we intend to pass through the props would be declared afterwards; `const products = props.products; const name = props.name; ` The complete code for both components will be **Home component** ``` import { useState } from "react"; import ProductList from "./ProductList"; const Home = () => { const [products, setProducts] = useState([ { name: "Product 1", description: "lorem 123...", price: "$3.90", id: 1 }, { name: "Product 2", description: "lorem 123...", price: "$4.90", id: 2 }, { name: "Product 3", description: "lorem 123...", price: "$.90", id: 3 }, ]); return ( <> <div className="home"> <ProductList products={products} title="Available Products" /> </div> </> ); }; export default Home; ``` **ProductList component** ``` const ProductList = ({ products, title }) => { return ( <> <div className="products"> <h1>{title}</h1> <ul> {products.map((product) => ( <li className="product-list" key={product.id}> <h3> {product.name}</h3> {<p> {product.price}</p>} </li> ))} </ul> </div> </> ); }; export default ProductList; ``` **Outcome** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hul8u3271fspdt5y3mi1.png) You can apply your own CSS as you please. ## Reusing components We would like to create another view that displays only goods that cost less than $10. It would be counterproductive to create a brand new component for this purpose, especially since we still want to display the same information in the same format. To achieve this, we will pass in, as props, a filtered list that excludes products costing more than $10. We can declare a cheapProducts variable above the return statement, make the variable equal to the filtered array, and then pass cheapProducts as the prop value ` const cheapProducts = products.filter((product) => product.price < 10);` In our home component, we can pass in the props with cheapProducts. ``` <ProductList products={cheapProducts} title="Products less than $10.00" /> ``` The full code on the home component ``` import { useState } from "react"; import ProductList from "./ProductList"; const Home = () => { const [products, setProducts] = useState([ { name: "Product 1", description: "lorem 123...", price: 5.45, id: 1 }, { name: "Product 2", description: "lorem 123...", price: 9.99, id: 2 }, { name: "Product 3", description: "lorem 123...", price: 20.0, id: 3 }, ]); const cheapProducts = products.filter((product) => product.price < 10); return ( <> <div className="home"> <ProductList products={products} title="All Products" /> <div> <ProductList products={cheapProducts} title="Products less than $10.00" /> </div> </div> </> ); }; export default Home; ``` The ProductList.js file has remained untouched **Outcome** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dnaf0encnv1upy3gx0o7.png) You can render this component in as many parent components as you like; what will make a difference will be the value of the props.
## Delete item We want to generate a click event that we can use to delete an item from the array. Create a button with a click event handler in the ProductList component, inside the li. Do not forget to wrap the function in an anonymous function so we can pass in arguments. ` <button onClick={() => handleDelete(product.id)}>Delete</button> ` We would need to create a function to handle the click event. But the function is better created in the parent Home component, where we can use the useState hook to update the component. Create a new variable inside the handleDelete function to hold the filtered array. Then, to make it reactive, pass it as an argument to the state setter function, which in this case is setProducts. ``` const handleDelete = (id) => { const newProducts = products.filter((product) => product.id !== id); setProducts(newProducts); }; ``` What we will do next is pass in handleDelete as a prop in both the parent and child components as appropriate. The code for both files will go like so; Home page ``` import { useState } from "react"; import ProductList from "./ProductList"; const Home = () => { const [products, setProducts] = useState([ { name: "Product 1", description: "lorem 123...", price: 5.45, id: 1 }, { name: "Product 2", description: "lorem 123...", price: 9.99, id: 2 }, { name: "Product 3", description: "lorem 123...", price: 20.0, id: 3 }, ]); const handleDelete = (id) => { const newProducts = products.filter((product) => product.id !== id); setProducts(newProducts); }; return ( <> <div className="home"> <ProductList products={products} title="All Products" handleDelete={handleDelete} /> </div> <div> </div> </> ); }; export default Home; ``` ProductList component ``` const ProductList = ({ products, title, handleDelete }) => { return ( <> <div className="products"> <h1>{title}</h1> <ul> {products.map((product) => ( <li className="product-list" key={product.id}> <h3> {product.name}</h3> <p> {product.price}</p> <button onClick={() => handleDelete(product.id)}>Delete</button> </li> ))} </ul> </div> </> ); }; export default ProductList; ``` Output ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1rxhu78eph7j3bx1baqf.png) Clicking on the delete button will remove the particular product from the rendered array. Note that the original array is not mutated. What is rendered here is the newProducts array that we set with the setProducts function. ## Summary React is a front-end library for building single page applications. Only the index.html file will be sent from the server, while react handles every page load from the client side. React makes use of reusable components to render content to the DOM. These components contain their own logic and html templates and are exported for use by other parent components. State in react refers to data that changes within the component over time. In react, we do not use methods like document.getElementById to interact with the DOM directly, and we also cannot simply change variable values in the hope that react will re-render the new values. Hence we use the useState hook to handle and manage state in the application. Props are used to make components reusable. To pass data from a parent component to child components, you have to pass the data as props. We can also pass in functions as props to perform various operations.
kemiowoyele1
1,875,361
Code With Heroines : SSL && Unification of China under the Han Dynasty
Explanation This represents the official discipline of Confucian studies introduced by...
0
2024-06-03T12:45:03
https://dev.to/fubumingyu/code-with-heroines-ssl-unification-of-china-under-the-han-dynasty-3l0k
ssl, aiart, han
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/59nn0y52mdkjou1usjly.png) ## Explanation This represents the official discipline of Confucian studies introduced by Emperor Wu, emphasizing the importance of secure and reliable knowledge, much like how SSL ensures secure data transmission. ## Unification of China under the Han Dynasty China was unified under the Han Dynasty by its founder Liu Bang (r. 202-195 BC), rather than remaining under the Qin Dynasty. Rather than implementing drastic reforms, Liu Bang combined the county and prefecture system with the feudal system, initially appointing lords from his family and meritorious vassals. However, family members soon replaced these lords, leading to the rebellion of Wu, Chu, and the other states in 154 BCE, which the government suppressed, consolidating its power. A centralized government was established during Emperor Wu's reign (141-87 BCE). Emperor Wu introduced a nomination-based system for appointing officials and made Confucian studies an official discipline. He pursued aggressive foreign policies, defeating the Xiongnu and expanding territory into northern Vietnam and the Korean peninsula. Despite an unsuccessful alliance with the Dajue clan in Central Asia, his efforts opened the way for managing the western regions. ## What is SSL? SSL (Secure Sockets Layer) is a mechanism for encrypting communications between the server that hosts a website and the browser. The most important reason for installing SSL is to protect the information of website users. Suppose personal details or credit card information is transmitted without encryption. In that case, there is a risk of it being intercepted and misused, so SSL is necessary to ensure that users can use the website with peace of mind. In addition, from the SEO perspective, there is a growing movement toward the introduction of SSL: it has been adopted as a ranking factor, and warnings have been displayed for sites without SSL (non-https) for some time, and from October 2023, browsers such as Chrome apply HTTPS-First Mode, which automatically changes "http" to "https". At first glance, SSL seems secure by itself once it is installed, but there are pitfalls. As I mentioned, SSL is an encryption mechanism between the server and the browser; it does not guarantee the security of the website that the user accesses. It is not uncommon to find phishing sites with SSL installed. ## Reference - [SSLとは?必要な理由を説明します](https://rs.sakura.ad.jp/column/rs/whats-ssl/?gad_source=1&gclid=Cj0KCQjwu8uyBhC6ARIsAKwBGpRkx1dblwmmGKgnNzElneV6-RjzPENWLa3LyYpdHsK6oddO3I6HU2EaAtaqEALw_wcB#ssl%e3%81%a8%e3%81%af%ef%bc%9f)
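Since the article describes SSL/TLS as the mechanism that encrypts traffic between the server and the browser, a minimal Node.js sketch of the server side may help; the certificate and key file names below are placeholders, not anything from the original article, and a real certificate would come from a CA such as Let's Encrypt:

```javascript
const https = require("https");
const fs = require("fs");

// "server-key.pem" and "server-cert.pem" are assumed placeholder file names.
const options = {
  key: fs.readFileSync("server-key.pem"),
  cert: fs.readFileSync("server-cert.pem"),
};

// Traffic between the browser and this server is encrypted in transit.
https
  .createServer(options, (req, res) => {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello over HTTPS\n");
  })
  .listen(8443); // a non-privileged port chosen for illustration
```

As the article points out, this only protects data in transit; it says nothing about whether the site itself is trustworthy, which is why phishing sites with valid certificates still exist.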
fubumingyu
1,875,358
Discover Code Hunts: Pakistan's Premier Coding Hub
Introduction In the digital age, coding skills are a gateway to numerous opportunities....
0
2024-06-03T12:45:00
https://dev.to/hmzi67/discover-code-hunts-pakistans-premier-coding-hub-16p8
webdev, javascript, beginners, programming
### Introduction In the digital age, coding skills are a gateway to numerous opportunities. Enter [Code Hunts](https://codehuntspk.com/), Pakistan's leading platform for aspiring and seasoned developers. With a mission to cultivate and nurture coding talent, Code Hunts provides a comprehensive suite of resources, challenges, and community engagement to empower Pakistan's tech enthusiasts. ### What is Code Hunts? Code Hunts is an innovative online platform dedicated to enhancing coding skills through practical experience. It offers a range of coding challenges, tutorials, and competitions designed to cater to various skill levels—from beginners to advanced programmers. The platform emphasizes hands-on learning, encouraging users to apply theoretical knowledge to real-world problems. ### Features and Offerings #### 1. **Coding Challenges** Code Hunts presents a vast array of coding problems, regularly updated to reflect current industry trends. These challenges help users improve their problem-solving abilities and proficiency in languages such as Python, Java, JavaScript, and more. #### 2. **Tutorials and Courses** Comprehensive tutorials and courses cover fundamental to advanced topics in programming. These educational materials are crafted by industry experts, ensuring users gain practical insights and up-to-date knowledge. #### 3. **Competitions** Regularly hosted coding competitions provide a platform for users to test their skills against peers, fostering a competitive spirit and offering opportunities to win exciting prizes. These events are pivotal in preparing participants for national and international coding contests. #### 4. **Community and Collaboration** Code Hunts boasts a vibrant community of developers. Through forums and discussion boards, users can seek advice, share knowledge, and collaborate on projects, creating a supportive environment conducive to learning and growth. #### 5. **Career Development** With a focus on career advancement, Code Hunts offers resources like resume-building workshops, interview preparation sessions, and connections with top tech companies. These initiatives help bridge the gap between education and employment, guiding users towards successful tech careers. ### Why Choose Code Hunts? Code Hunts stands out due to its commitment to excellence and user-centric approach. By providing diverse learning pathways, fostering a collaborative community, and maintaining high standards of content quality, Code Hunts has become an essential resource for Pakistan's tech community. ### Conclusion As Pakistan continues to make strides in the tech industry, platforms like Code Hunts play a crucial role in shaping the next generation of developers. Whether you're starting your coding journey or looking to sharpen your skills, Code Hunts offers the tools, resources, and community support you need to succeed. Join Code Hunts today and be part of Pakistan's coding revolution.
hmzi67
1,875,357
Outdoor Home Improvement & Indoor Home Improvement
Discover comprehensive home and building solutions in Kochi tailored to your needs.We offers building...
0
2024-06-03T12:43:33
https://dev.to/aydins_world_2fa5da18868/outdoor-home-improvement-indoor-home-improvement-4pag
Discover comprehensive home and building solutions in Kochi tailored to your needs. We offer building construction, flooring, soffit ceiling installation, WPC paneling, and more. Professional home and building solutions in Kochi. Trust our experts to deliver exceptional results. Contact us for a consultation on building solutions in Kochi. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w0qpdbz6eg8vb8ik2s8v.jpg)
aydins_world_2fa5da18868
1,875,356
How Android App Developers Can Improve Inventory Tracking
Warehouses play a significant role in a logistics business, but chaos reign supreme in most of them....
0
2024-06-03T12:42:51
https://dev.to/deerapandey/how-android-app-developers-can-improve-inventory-tracking-135j
android, webdev, programming, productivity
Warehouses play a significant role in a logistics business, but chaos reigned supreme in most of them. Boxes would be overflowing, aisles looked like obstacle courses, and hurriedly searching for misplaced stock was a common scene. Every order was followed by warehouse staff struggling to open boxes and scaling shelves at dangerous speed, with yelling and cursing echoing through the warehouse. Confusion prevailed in the stockroom, and tracking inventory was like a never-ending game of hide-and-seek. Orders piled up because packages were misplaced, and employees spent hours running between shelves. Finding the right stock for every order resembled a frantic treasure hunt. Manual inventory tracking is typically done using spreadsheets. However, spreadsheets are error-prone and time-consuming. Inventory is an integral part of the logistics business. Inventory management prevents stockouts and overstocking. Inventory management includes the processes involved in the manufacturing, storage, and distribution of goods. It ensures the availability of the right goods at the right place and at the right time. Common problems faced in inventory management are overselling a product and running out of inventory. Excessive inventory ties up capital and increases storage and carrying costs. Stockouts can lead to missed sales opportunities and dissatisfied customers. ## Need for a Software Solution Inventory management software stores information about the products stored at multiple facilities. It has details of each product, such as its quantity, location in the warehouse and expiration date, if applicable. Building software for inventory management also helps in analyzing seasonal and historical data and predicting orders. An app to manage inventory can track processes right from the point of manufacturing to sales. It helps businesses to monitor inventory levels, orders, and deliveries, create accurate reports, and manage processes from a single location. The results help businesses to make better decisions and increase their profits. ## Why Should You Build Android Apps? Today, mobile phones have become an essential part of people's lives. The mobile app market has grown rapidly in recent years, with a constantly increasing number of people using smartphones for day-to-day tasks. Logistics businesses can leverage the power of these phones and tablets. By building Android apps for inventory management, businesses have the potential to reach a vast number of people. Android is an open-source platform. Android's user interface is highly customizable, which means that developers can create unique and engaging user experiences. Developers can modify the code at any time to meet the evolving needs of a business. They can create custom features depending upon the requirements. Android app development is relatively simple and cost-effective because developers have access to a wide range of tools and resources. They can utilize a range of programming languages, including Java, Kotlin, and C++. Kotlin is popular among developers due to its simplicity, conciseness, and interoperability with Java. Employees can have the app on their smartphones and utilize it to scan barcodes. The app helps to access real time data about inventory, get updates about stock levels from anywhere, and get alerts on stockouts. The entire inventory can be viewed at any time without the hassle of searching for information on numerous spreadsheets.
Apps hosted in the cloud may be accessed from any location and help companies manage every aspect of their business from one place. They are designed to make the inventory management process efficient and straightforward for businesses. ## Benefits of Inventory Management App - Real-time tracking of inventory items - Effective inventory control - Better inventory planning and ordering - Creation of customized reports - Stock management from one central location - Accessible on any mobile device - Cloud-based for easy access from anywhere - Better customer service - Reduce product loss from theft, spoilage and returns Many businesses have turned to inventory management apps to get quick updates on inventory and make better decisions. The global inventory management software [market size](https://www.fortunebusinessinsights.com/inventory-management-software-market-108589) was valued at $2.13 billion in 2023 and is projected to grow from $2.31 billion in 2024 to $4.84 billion by 2032. ## Build Custom Android Apps With effective inventory management, businesses can reduce the chances of expired stock, prevent stockouts, and minimize disruptions in the supply chain. The Android apps for inventory tracking and management can be effortlessly installed on smartphones to reduce stock outs and fulfill orders quickly. Poor inventory management is a reason for logistics businesses to lose a chunk of their profit. Eliminate maintaining records on paper and spreadsheets and embrace the mobile revolution. Your warehouse and your customers will thank you for it. Hire Android app developers skilled in the latest versions of Android, programming languages, and APIs. Build full-fledged customized Android apps to track and manage inventory and enjoy the benefits of security, stability, and superior performance. Contact us to [**hire android developers**](https://www.isquarebs.com/hire-dedicated-developers/) to build apps that meet the demands of your logistics business.
deerapandey
1,875,355
How to Delete Courses in Google Classroom Using the Google API
In this article, we will document how to use the Google Classroom API to list and delete courses. We will...
0
2024-06-03T12:42:48
https://dev.to/madrade1472/como-excluir-cursos-no-google-classroom-usando-a-api-do-google-255p
In this article, we will document how to use the Google Classroom API to list and delete courses. We will detail the steps needed to configure and run the Python code that performs these operations.

**Step by Step**

**Authentication:** The code checks whether the token.json file exists in order to use previously stored credentials. Otherwise, it starts the OAuth 2.0 authentication flow.

**List Courses:** The code calls the Google Classroom API to list all available courses.

**Delete Course:** The code asks for the ID of the course to be deleted and calls the API to delete the specified course.

**Execution**

Run the Script: Run the Python script.
Initial Authentication: If this is the first run, you will be redirected to a Google login page to grant permissions to the application.
List Courses: After authentication, the script will list all available courses.
Delete Course: Enter the ID of the course you want to delete.

## Prerequisites

1. **Google Account:** Make sure you have a Google account.
2. **Google Cloud Project:** Create a project in the [Google Cloud Console](https://console.cloud.google.com/).
3. **Google Classroom API:** Enable the Google Classroom API for your project.
4. **OAuth 2.0 Credentials:** Configure OAuth 2.0 credentials for your project.

## Environment Setup

1. **Install the required libraries:** Run the command below to install the required libraries.

```bash
pip install google-auth google-auth-oauthlib google-auth-httplib2 google-api-python-client
```

2. **Create a `credentials.json` file:** Download the OAuth 2.0 credentials from the Google Cloud Console and save them as `credentials.json` in your project directory.

## Python Code

Here is the complete code to list and delete courses in Google Classroom:

```python
import os.path

from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

# Scopes needed to list courses, add students and teachers, and create and delete courses in Google Classroom
SCOPES = [
    "https://www.googleapis.com/auth/classroom.courses",
    "https://www.googleapis.com/auth/classroom.courses.readonly",
    "https://www.googleapis.com/auth/classroom.rosters"
]


def main():
    """
    Code used to delete a course from Google Classroom.
    """
    creds = None
    if os.path.exists("token.json"):
        creds = Credentials.from_authorized_user_file("token.json", SCOPES)
    # Will ask for authentication
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                "credentials.json", SCOPES
            )
            creds = flow.run_local_server(port=0)
        # Saves the credentials and shows the courses
        with open("token.json", "w") as token:
            token.write(creds.to_json())

    try:
        service = build("classroom", "v1", credentials=creds)

        # Lists all available courses
        list_courses(service)

        # Asks for the ID of the course to be deleted
        course_id = input("Enter the ID of the course to be deleted: ")
        delete_course(service, course_id)
    except HttpError as error:
        print(f"An error occurred: {error}")


def list_courses(service):
    try:
        # Calls the Classroom API to see the courses
        results = service.courses().list(pageSize=1000).execute()
        courses = results.get("courses", [])

        if not courses:
            print("No courses found.")
            return

        # Prints the names of the created courses and classes
        print("Courses:")
        for course in courses:
            print(f'{course["name"]} (ID: {course["id"]})')
    except HttpError as error:
        print(f"An error occurred: {error}")


def delete_course(service, course_id):
    try:
        service.courses().delete(id=course_id).execute()
        print(f"Course with ID {course_id} deleted successfully.")
    except HttpError as error:
        print(f"An error occurred: {error}")


if __name__ == "__main__":
    # Delete the token.json file to force re-authentication
    if os.path.exists("token.json"):
        os.remove("token.json")
    main()
```
madrade1472