id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,469,592 | Navigating Web Analytics: Tracing the Evolution and Introducing Zaraz | Originally posted here:... | 0 | 2023-05-16T07:25:01 | https://dev.to/imtiaz101325/navigating-web-analytics-tracing-the-evolution-and-introducing-zaraz-4o4o | zaraz, cloudflare, analytics, seo | Originally posted here: https://imtiaz101325.notion.site/Navigating-Web-Analytics-Tracing-the-Evolution-and-Introducing-Zaraz-d472aad2e60547c89bf13066e220170e
The digital realm is a fast-paced, evolving landscape. Significant contributors to this evolution include third-party utilities, Tag Management Systems (TMS), and Customer Data Platforms (CDP), which have fundamentally altered our approach to web analytics. Amid this ongoing transformation, one groundbreaking solution emerges - Zaraz, a server-side tag manager that operates at the edge and is redefining the web analytics domain.
In this blog post, we'll cover:
- The emergence and impact of third-party utilities in web analytics.
- The advent and significance of Tag Management Systems (TMS) and Customer Data Platforms (CDP).
- An introduction to Zaraz, a groundbreaking server-side tag manager, and its unique features.
- The current state of third-party tools, their implications, and future trends.
- A look towards the future of TMS and CDP technologies.
## **The Dawn of Third-Party Utilities**
In the realm of marketing automation and analytics, third-party utilities such as [Google Analytics](https://analytics.google.com/analytics/web/) and [Facebook Pixel](https://www.facebook.com/business/tools/meta-pixel) have secured a substantial foothold. These tools employ 'tags', or code fragments, embedded within each webpage. Especially prevalent on traditional server-rendered pages, these tags facilitate a range of tracking and monitoring functions.
However, as technology advanced, managing the escalating volume of tags on web pages became a significant challenge. This situation was particularly evident around 2010 when tech teams started wrestling with the bottleneck created by multiplying tags. This situation led to the advent of [Tag Management Systems](https://en.wikipedia.org/wiki/Tag_management_system) (TMS), which empowered marketing teams to independently inject tags, scripts, and more into their web pages, thereby promoting efficiency and agility.
Stepping into the limelight was [Segment](https://segment.com/), a venture backed by Y Combinator, which caused a seismic shift in the industry by creating [Analytics JS](https://github.com/segmentio/analytics.js/), a unified API consolidating event collection procedures from esteemed analytics tools like JTrack, [Facebook Track](https://developers.facebook.com/docs/meta-pixel/implementation/conversion-tracking/), and [Mixpanel Track](https://developer.mixpanel.com/reference/track-event). This innovation marked the advent of [Customer Data Platforms](https://segment.com/resources/cdp/) (CDPs), the subsequent phase in tag management software evolution. CDPs allowed organizations to derive actionable intelligence from their data.
## **Zaraz: The Groundbreaking Server-Side Tag Manager**
[Zaraz](https://www.cloudflare.com/products/zaraz/) is a server-side tag manager, running on the edge, that allows users to manage their tags and track user data in real-time. Another [Y Combinator](https://www.ycombinator.com/) [success](https://www.ycombinator.com/companies/zaraz), it was acquired by [Cloudflare](https://www.cloudflare.com/), an internet company that provides CDN, cybersecurity, and DDoS mitigation services, and is now part of Cloudflare's suite of products. Zaraz offers a resilient, adaptable TMS that features the developer experience of CDPs and aligns with the requirements of [Data Protection Authorities](https://commission.europa.eu/law/law-topic/data-protection/reform/what-are-data-protection-authorities-dpas_en) (DPAs). This tool provides a streamlined tag management process and can integrate with any third-party tool without significantly affecting page speed. Zaraz sets itself apart with its state-of-the-art computing capabilities. It blends the best features of TMS and CDP, offering users the ease of a dashboard and a unified API for an elevated [Developer Experience](https://www.getclockwise.com/blog/what-is-developer-experience) (DX).
On top of providing a streamlined tag management process, Zaraz allows you to control what the scripts you insert on your page do, ensuring they don't have access to anything they aren't authorized to see. This way, even if a third-party script provider is compromised, it won't result in a security incident.
A standout feature of Zaraz is its integration with [Cloudflare's Workers Platform](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/), offering a Data Localization Suite that aids clients in maintaining GDPR compliance. This feature enables clients to choose specific regions to run their software and ensures no superfluous data logging occurs.
Key aspects that set Zaraz apart:
- It capitalizes on the [Cloudflare network](https://www.cloudflare.com/network/), operating as closely to the user as possible.
- Cloudflare customers enjoy a substantial advantage.
- Utilizes [HTML Rewriter](https://developers.cloudflare.com/workers/runtime-apis/html-rewriter/) to auto-integrate Zaraz.
- Operates on its own domain as a first-party tool.
- Utilizing workers allows it to offload client-side evaluations.
## **Third-Party Tools Statistics: A Numbers Game**
[An intriguing statistic](https://almanac.httparchive.org/en/2021/third-parties?ref=blog.cloudflare.com#fig-4) shows that 94% of global mobile pages host at least one third-party utility. Larger websites average around 43 tools. This heavy reliance on third-party tools can cause significant performance lags, particularly on mobile devices. Considering the substantial influence of page speed on SEO ranking and conversion rates, internet companies are now prioritizing [Core Web Vitals](https://web.dev/learn-core-web-vitals/), especially those focusing on consumers like e-commerce, publishers, and direct-to-consumer websites.
Data privacy concerns are also escalating, with DPAs restricting the use of third-party tools due to privacy issues associated with sharing users' IP addresses. This trend is increasingly apparent not only in Europe but also in the US and Asia. Moreover, the future of advertising is transitioning towards a model devoid of third-party cookies. As browsers prohibit the use of third-party cookies, companies dependent on advertising are moving towards technologies that support this new model.
Acknowledging user concerns about the potential influence of third-party tools on their websites, the Zaraz team developed a revolutionary solution - the first-ever TMS/CDP that loads tools at the edge, away from the user's browser. This inventive, cloud-focused strategy offers numerous benefits. By relocating the tools to the cloud, websites can sustain peak performance without being overloaded by the plethora of third-party tools. Furthermore, this method ensures a higher degree of control over data transmission, mitigating potential GDPR compliance issues related to transmitting users' personal data abroad.
As we gaze into the future of TMS and CDP technologies, the evolution of these tools points towards continuous enhancements in data management and analytics. With Zaraz at the forefront of this evolution, businesses can anticipate amplified control, flexibility, and performance in managing their website's tags and customer data.
## **Conclusion**
The trajectory of web analytics and tag management systems underscores the dynamic nature of the digital universe. From the emergence of third-party tools, to the ascendance of CDPs, and the unveiling of cloud-centric TMS like Zaraz, we are on an exhilarating path of innovation and adaptation. This journey is fueled by an unwavering dedication to efficiency, performance, and data privacy. These elements are vital for SEO experts striving to optimize their clients' digital footprint. As Zaraz continues to revolutionize the way users manage their tags, it emphasizes the importance of innovation and adaptation in the swiftly changing landscape of web analytics. | imtiaz101325 |
1,469,913 | How Top Enterprises Embrace Blockchain Technology to Improve Their Business Operation | Here are the top enterprises implementing blockchain into their businesses to bring more efficiency... | 0 | 2023-05-16T13:32:49 | https://dev.to/cooper_91/how-top-enterprises-embrace-blockchain-technology-to-improve-their-business-operation-p2c | blockchain, javascript, programming | Here are the top enterprises implementing blockchain into their businesses to bring more efficiency to their business operations.
**1, Walmart: Product Tracking**
Retail giant Walmart is one of the multinationals that have taken the lead in blockchain implementation. The company is leveraging distributed ledger technology to enhance its data tracking and management processes in day-to-day operations. Walmart is also working with IBM, in what is seen as a textbook example of blockchain implementation, on tracking the meat and poultry products sold in its outlets.
**2, British Airways: Blockchain Monitoring System**
Britain's largest airline, British Airways, is another example of a company using blockchain. It has implemented blockchain technology to manage flight data between key airports in London, Geneva, and Miami. The airline is also testing a new blockchain-powered VChain Verification Service that has the potential to revolutionize the check-in process.
**3, Maersk: Blockchain Shipping Platform**
Maersk, another example of a company using blockchain, has teamed up with tech giant IBM to launch a platform to monitor the shipping of cargo. TradeLens is the resulting blockchain platform, used to track shipments as they move from one port to another.
**4, Tencent: Blockchain For Legal Billing and Taxation**
Tencent is implementing blockchain technology to enhance legal billing and taxation in the Chinese city of Shenzhen. The multinational is exploring blockchain for enterprise applications, leveraging the technology to combat fake invoices and to reduce cases of businesses in the city taking advantage of tax drawbacks.
**5, Facebook: Blockchain For Digital Payments**
Facebook has had to come up with new ways of ensuring better protection of people’s data in response to the Cambridge Analytica scandal. Implementing blockchain technology to take care of the huge trove of data appears to be the company’s latest push.
**6, Walt Disney: Blockchain To Track Inventories**
Blockchain implementation in theme park giant Walt Disney is also taking shape. The most obvious use case of the technology in the company’s operations is the tracking of inventories as well as sales and shipments in the parks.
Tech giants believe that blockchain will bring more efficiency and transparency to their businesses, and they have started moving towards this future technology. If you are looking to implement blockchain in your business but don't know where to start, let the experts take care of it. We are a leading **[enterprise blockchain development](https://maticz.com/enterprise-blockchain-development)** company covering the full range of blockchain-related services, tailored to your business requirements.
Maticz offers a wide range of services like **[IDO development services](https://maticz.com/ido-development)**, smart contract development, smart contract audit services, **[DApp development services](https://maticz.com/dapp-development-company)**, real estate NFT development, token development, **[DAO development](https://maticz.com/dao-development-services)**, and white label NFT marketplace development.
We are also active in the blockchain gaming field, offering **[Play to earn NFT game development](https://maticz.com/play-to-earn-games-development)**, **[GameFi development services](https://maticz.com/gamefi-development-company)**, and STEPN clone app development.
| cooper_91 |
1,470,080 | Launch Alert! Wowen Modular Blockchain Network - Unleash the Power of Consensus Choice 🚀⛓️💻 | Welcome to Wowen, the world's first modular blockchain network tailored for developers seeking... | 0 | 2023-05-16T15:55:07 | https://dev.to/wowen_network/launch-alert-wowen-modular-blockchain-network-unleash-the-power-of-consensus-choice-4d8k | ethereum, blockchain, polygon, binance | **Welcome to Wowen, the world's first modular blockchain network tailored for developers seeking ultimate control and flexibility.** Hailing from Switzerland, Wowen revolutionizes the Web3 space by allowing developers to select their preferred consensus mechanism for each transaction.
Unleash your blockchain's potential - Choose your consensus per transaction!
Wowen empowers you to optimize your applications for speed, cost, and finality by selecting from a variety of consensus chains such as Ethereum, Binance, Avalanche, Polygon, or Hedera. This cutting-edge offering opens up new avenues for development, with the ability to operate multiple consensus chains concurrently, all within a single network.
We're excited to invite you to the Wowen Alpha release. Be among the trailblazers to test the network and experience this game-changing modular (r)evolution in blockchain technology.
Stay ahead of the curve by following us on [Twitter](https://twitter.com/wowen_network) and becoming part of our developer community on [Telegram](https://t.me/WowenNetwork). For comprehensive developer docs and the latest updates, check out our [Gitbook](https://docs.wowen.io).
Are you ready to redefine your blockchain development experience? Think modular. Join us at [Wowen](https://www.wowen.io)!
-The Wowen Team | wowen_network |
1,470,471 | Getting started using Google APIs: Workspace & OAuth client IDs (2/3) | Introduction Are you a developer but a complete beginner using Google APIs? This series is... | 25,403 | 2023-05-16T21:58:52 | https://dev.to/googleworkspace/getting-started-using-google-apis-workspace-23-3ch8 | python, googlecloud, node, googleapi | <!-- Getting started using Google APIs: Workspace (Part 1/3) -->
## Introduction
Are you a developer but a complete beginner using Google APIs? This series is for you because I'm showing you how to get started from scratch. Each API family differs from others, so while it would be _great_ if the UX (user experience) was completely consistent across all APIs, this isn't the case, so my goal is to help developers understand how their use differs between product families. As mentioned in the [previous post](/wescpy/coding-python-and-google-with-wescpy-3bag), I'm starting with the Google Workspace (formerly called _G Suite_ and _Google Apps_) APIs.
Google Workspace ("GWS") refers to the collection of: Gmail, Google Drive, Calendar, Docs, Sheets, Slides, Forms, plus others. Behind each of these well-known apps is a RESTful API. Why GWS APIs first? The main reason is they're completely free (up to certain limits) and **do not require a credit card**. Don't worry though... we'll get to other Google APIs soon thereafter, including: Cloud ("GCP"), Google Maps, YouTube, Firebase, etc., and more interesting, demo apps using APIs from different product families.
Regardless of which Google APIs you use, there are **two** main requirements:
1. **Authentication** — typically login & password; using Google APIs requires a Google account
1. **Authorization** — an application needs permission to call an API & perform data access
A third item you _should_ use is a "client library." While optional, their use is strongly recommended, and I'll cover this topic in the next post. Authentication and authorization are commonly abbreviated as "authn" and "authz", respectively. Because both are long, similar words that start with, "auth," some think they're the same thing, but alas, they're not. Check out [this page](https://developers.google.com/workspace/guides/auth-overview) in the official docs to help differentiate. I'll do the same below as well as guide you to creating the required resources.
## Authentication ("authn") — get a Google account

<figcaption>Authentication: proving you are who you say you are</figcaption>
<br>
Authn refers to _user identity_, typically achieved with a login and password. More secure forms of authn include responding to a smartphone prompt or providing a biometric such as a fingerprint or retina scan. While a Google account is required to use Google APIs, it doesn't have to be a _Gmail account_; Google accounts can be based on an existing email address. Having a Gmail account is only convenient because it combines the two, giving you both a personal email _and_ a Google account, in a single entity.
Another kind of Google account is a Workspace account. You get one when your company or school chooses [Workspace](https://gsuite.google.com) as its office productivity suite. I don't recommend these types of accounts because it is _very_ likely your school or company administrator has disabled developer projects because of billing or security concerns. So just stick with a Gmail or personal Google account.
### Action item(s)
1. Create a new Google account (or select an existing account). Per my earlier remark, you can [create an account **with** a Gmail address](https://accounts.google.com/SignUp) _or_ [**without** Gmail](https://accounts.google.com/SignUpWithoutGmail) using your personal email address. Once you're logged into your new or existing Google account, you're done with authn.
## Authorization ("authz") — create a Google developer project and relevant credentials

<figcaption>Authorization: You may be authenticated, but do you have API/data access?</figcaption>
<br>
Credentials give your app the authorization to access a Google API. Google APIs support three credentials types. Which you use depends on who owns the data that an API accesses, as shown in the following table:
Credential type | Data accessed by API | Example Google API families
--- | --- | ---
API key | Public data | Maps APIs
OAuth client ID | Data owned by (human) users | GWS APIs
Service account | Data owned by apps/projects | GCP APIs
<figcaption>Required credential type determined by API data "owner"</figcaption>
<p> </p>
<br clear=all>
1. **API keys** — used for _APIs that access public data_, e.g., looking for places on Google Maps, sending a picture to the GCP Vision API, searching YouTube for videos, and so on. Because all _Maps APIs_ access public data, expect to only use API keys.
1. **OAuth client IDs** — used for _APIs that access data owned by (human) users_. Your app needs a user's permission to access their documents on Drive, their messages in Gmail, videos in a user's YouTube playlists, and so on. Since Workspace data is almost always personal data, expect to use OAuth client IDs with _GWS APIs_.
1. **Service accounts** — used for _APIs that access data owned by an app or project_. For cloud-based apps, there's no human to prompt for permissions, plus the data isn't owned by humans anyway. Authorization is implicitly granted via service account public/private keypairs created with specific permissions. As you can also guess, this means _GCP APIs_ typically require service accounts.
As mentioned in the [previous post](/wescpy/coding-python-and-google-with-wescpy-3bag), I'm starting with GWS APIs first, meaning using _OAuth client IDs_. Future posts will cover Google APIs that use either _API keys_ or _service accounts_. From the above, while it may seem APIs only accept one type of credentials, note that some Google APIs accept more than one type.
For example, while the [Cloud Vision API](http://cloud.google.com/vision) typically expects service accounts, it may also work with API keys or OAuth client IDs. I created a [sample app featuring the Vision API where it used OAuth client IDs for authz](http://goo.gle/3nPxmlc) — my justification being that the app also uses the Drive and Sheets APIs. It's already tricky enough using multiple Google APIs much less using multiple credentials types too.
> #### OPTIONAL: Complete experience via "codelab"
>
> In the section below, I offlink several times to a tutorial (a "Google codelab") I wrote a few years ago introducing developers to coding with GWS APIs. Codelabs are self-paced, hands-on tutorials that lead you step-by-step in accomplishing a task and/or learning a Google API or API feature. Rather than regurgitating the exact same content in these posts, I'll point to specific sections to review or execute.
>
> The codelab consolidates the content from these (three) posts and implements a Python script that displays the first 100 files or folders in a user's Google Drive. We'll eventually end up there, but some of you may learn better/faster by doing an end-to-end, immersive project, in which case, feel free to do the entire codelab at <http://g.co/codelabs/gsuite-apis-intro>. (If interested, _all_ Google codelabs can be accessed at <http://g.co/codelabs>.)
Now let's set up your _project_ and allocate credentials for your app. If you're new to Google APIs, a _project_ is the logical container for an application and its resources, and all API usage requires a project. Once you have a project, you need to create credentials for API access. Before executing the instructions below, review [this page](https://support.google.com/cloud/answer/6158849) and [this one](https://developers.google.com/identity/gsi/web/guides/get-google-api-clientid) in the official docs to get an idea of what you'll be doing because we're pretty much going to follow the steps outlined in those docs.
### Action item(s)

<figcaption>Google developer console ("DevConsole")</figcaption>
<br>
1. Go to the Developer Console ("DevConsole") at <https://console.developers.google.com> and sign into your Google account. See the sidebar below if you have multiple Google accounts.
1. At the top of the DevConsole, _create a new project_ (look for buttons labeled **New Project** or **Create Project**), or select an existing project. Specific instructions on project setup can be found [in the codelab](https://codelabs.developers.google.com/codelabs/gsuite-apis-intro/#5).
1. Click on the **Credentials** tab in the left-nav to get to the credentials page listing all credentials for your project. Now click **Create credentials** at the top and select **OAuth client ID**. For **Application type**, select **Desktop app**. This and following steps are also detailed [in the codelab](https://codelabs.developers.google.com/codelabs/gsuite-apis-intro/#6) as well as the docs pages listed at the end of the previous section.
1. OAuth client IDs need a name, so take the default or create something meaningful like, "Google API demo".
1. You may be prompted to create an **OAuth consent screen**; do so and select an **External** app. You only need to provide basic info like app name, email address, etc. Since we're only building a prototype, this info will only be exposed to you as the test user of your script. To learn more about this, see the [OAuth consent screen documentation](https://developers.google.com/workspace/guides/configure-oauth-consent).
1. After the client ID has been created, you'll be prompted to download a JSON credentials file. Rather than a super long default filename like, `client_secret_SOME-GIANT-HASH.googleusercontent.com`, save it with a shorter name, e.g., `client_secret.json` or `client_id.json`. With an active project and downloaded JSON credentials file, your authz tasks are complete.
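Once downloaded, the JSON file can be sanity-checked before use. Below is a minimal Python sketch using only the standard library; `load_client_config` is a helper name made up for this example, but the `installed`/`web` top-level key layout and the field names shown follow the standard format of the downloaded credentials file.

```python
import json

def load_client_config(path="client_secret.json"):
    """Load a downloaded OAuth client ID file and return its config dict.

    Desktop-app credentials are stored under a top-level "installed" key;
    web-app credentials use "web" instead.
    """
    with open(path) as f:
        data = json.load(f)
    key = "installed" if "installed" in data else "web"
    config = data[key]
    # The client libraries need at least these fields to run the OAuth flow.
    for field in ("client_id", "client_secret", "auth_uri", "token_uri"):
        if field not in config:
            raise ValueError(f"missing required field: {field}")
    return config
```

In the next post, this file gets handed directly to the client library, so a check like this is purely a convenience for catching a truncated or mis-saved download early.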
> #### TIP: More than one Google account?
>
> If you have more than one Google account, select the avatar in the upper-right corner to confirm you're using the desired account. If not, select the correct one in the avatar dialog. When you have multiple Google accounts, an _authenticated user ("auth user") number_ helps your browser differentiate between those users in any Google service URL. This number shows up in the URL in one of these two formats: 1) `...&authuser=N&...` or 2) `.../u/N/...`. In both cases, `N` is the auth user number, starting at 0 (default) and incrementing for each account.
>
> As a convenience, you can add this HTTP field to most Google URLs to select which of your Google accounts to use. For example, use the following URL to access the DevConsole as auth user #2: `console.developers.google.com?authuser=2`.
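To illustrate, here's a small Python helper, standard library only, that appends (or replaces) the `authuser` field on a Google service URL; the function name `with_authuser` is invented for this sketch.

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def with_authuser(url, n):
    """Return `url` with an authuser=N query parameter added or replaced."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))  # keep any existing parameters
    query["authuser"] = str(n)
    return urlunparse(parts._replace(query=urlencode(query)))
```

For example, `with_authuser("https://console.developers.google.com", 2)` produces the DevConsole URL for auth user #2 shown above.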
## Summary
Congrats! You've completed the first step in using Google APIs. Note there was no Python, Node.js, or any language-specific steps today; the process you completed is language-agnostic and API-agnostic, meaning you can use any Google API accepting OAuth client IDs as its authz mechanism, that is, any API that accesses user-owned data.
## What's next?
In the next post, you _will_ be taking more language- and API-dependent steps:
1. Enable GWS APIs you wish to use
1. Install client library for GWS APIs
For those who prefer more visual content, below is a video I made years ago covering much of what's in this post and giving you a preview of the next one. The video's screenshots are outdated now but the functionality has mostly remained the same. Until the next post!
{% youtube DYAwYxVs2TI %}
<figcaption>[VIDEO] Creating new apps using Google APIs: DevConsole intro</figcaption>
<br clear=all>
NEXT POST: [Part 2/3 on working with GWS APIs](https://dev.to/wescpy/getting-started-using-google-apis-workspace-23-3ch8)
## References
Below are links related to the content in this post:
### Hands-on "codelab" tutorials
- [GWS API intro codelab](http://g.co/codelabs/gsuite-apis-intro)
- [_All_ Google codelabs](http://g.co/codelabs)
### OAuth client ID credentials
- [Google Support OAuth client ID info](https://support.google.com/cloud/answer/6158849)
- [Google Developers OAuth client ID info](https://developers.google.com/identity/gsi/web/guides/get-google-api-clientid)
- [OAuth consent screen](https://developers.google.com/workspace/guides/configure-oauth-consent)
- [Working with OAuth client ID credentials and GWS APIs (2/3)](https://dev.to/wescpy/getting-started-using-google-apis-workspace-23-3ch8) post
### GWS APIs general
- [GWS auth overview page](https://developers.google.com/workspace/guides/auth-overview)
- [GWS product home](https://gsuite.google.com)
- [GWS developer home](https://developers.google.com/gsuite)
### Other similar content by the author
- [Creating new apps/projects using Google APIs](http://youtu.be/DYAwYxVs2TI) DevConsole intro video
- [Using GCP APIs with OAuth client IDs](http://goo.gle/3nPxmlc) sample app using GCP APIs (Storage, Vision) & GWS APIs (Drive, Sheets)
---
<small>
**WESLEY CHUN**, MSCS, is a [Google Developer Expert](https://developers.google.com/experts) (GDE) in Google Cloud (GCP) & Google Workspace (GWS), author of Prentice Hall's bestselling ["Core Python"](https://corepython.com) series, co-author of ["Python Web Development with Django"](https://withdjango.com), and has written for Linux Journal & CNET. He runs [CyberWeb](https://cyberwebconsulting.com) specializing in GCP & GWS APIs and serverless platforms, [Python & App Engine migrations](https://appenginemigration.com), and Python training & engineering. Wesley was one of the original Yahoo!Mail engineers and spent 13+ years on various Google product teams, speaking on behalf of their APIs, producing sample apps, codelabs, and videos for [serverless migration](http://bit.ly/3xk2Swi) and [GWS developers](http://goo.gl/JpBQ40). He holds degrees in Computer Science, Mathematics, and Music from the University of California, is a Fellow of the Python Software Foundation, and loves to travel to meet developers worldwide at conferences, user group events, and universities. Follow he/him [@wescpy](https://twitter.com/wescpy) & his [technical blog](https://dev.to/wescpy). Find this content useful? [Contact CyberWeb](https://forms.gle/bQiDMiGyGrrwv5sy5) if you may need help or [buy him a coffee (or tea)](http://buymeacoffee.com/wescpy)!
</small>
| wescpy |
1,470,501 | What's Your Wildest Unfulfilled Coding Project Idea? | Got any coding project ideas that are totally out of the box? Share your wildest and most... | 22,092 | 2023-05-17T07:00:00 | https://dev.to/codenewbieteam/whats-your-wildest-unfulfilled-coding-project-idea-kog | discuss, beginners, codenewbie | Got any coding project ideas that are totally out of the box? Share your wildest and most unconventional project concept that you've been itching to bring to life but haven't had the chance yet. Let's inspire each other with our creative and unconventional ideas!
---
Follow the [CodeNewbie Org](https://dev.to/codenewbieteam) and [#codenewbie](https://dev.to/t/codenewbie) for more discussions and online camaraderie!
{% embed https://dev.to/codenewbieteam %}
| ben |
1,470,510 | The Next Language Evolution | Let's talk about language and how it's linked to the future of Software Engineers and Programmers.... | 0 | 2023-05-16T23:31:58 | https://dev.to/btfranklin/the-next-language-evolution-3afo | language, future, engineering, ai | Let's talk about language and how it's linked to the future of Software Engineers and Programmers. Now, these terms—Software Engineer, Programmer, Software Developer, Coder—often get used interchangeably. But I believe we're going to see more differences between them soon.
The best way to understand this is by looking at language.
What does language do? It's a tool for moving and storing information. It could be written, spoken, or even stored on a computer hard drive. Language helps us put structure to our thoughts or instructions, be it an image in our mind or a set of steps to follow.
Now, when we talk about "programming," language is how we tell a computer to do something step by step to get a result.
Machine language, the most basic form of programming, is tough to use for complex instructions. That's why we came up with Assembly Language, a step up but still pretty hard to use. So, we created compiled languages like C for more complex tasks. But underneath C, Assembly Language was still there.
Compiled languages still had their challenges. So, we invented even more languages, with better ways of organizing and expressing our instructions.
And then we got to a point where compiling was a limitation. So we created languages that didn't need compiling and were easier to read and change.
The big thing here is that as we changed languages, we changed the role of the "programmer." But these changes weren't random. We wanted better ways to move and store information—specifically, algorithms. Programmers didn't disappear, but their jobs changed. Software engineering didn't disappear either, but the level at which it could operate got higher. Each time, we hid away the boring and tedious stuff, improving the human experience of engineering.
Fast forward to today, we have Large-Language Models (LLMs), a big leap in how we use language to give instructions. We can now use everyday language for tedious and error-prone tasks, making our work more abstract and focused on higher-level goals. Isn't this just the next stage in how we use language to give instructions? Why would this make engineers irrelevant?
This is a new era of **empowerment**! This will allow smart engineers to give more complex instructions with ease and have a better experience than ever before. Each step in programming language advancement opened new doors and made it easier for people to enter the field, and this will too.
I don't believe AI will replace engineers. Rather, it will make them more expressive and powerful, and it will allow us to think about tackling more advanced projects. We may have fewer "coders" and "programmers", since those are jobs whose definition is bound to the languages used to perform them.
But just like nobody longs to be a “punchcard technician” today, nobody will be sad that we’ve moved on.
| btfranklin |
1,470,577 | The "Noise" in Rust | Compared with Python's conciseness, Rust contains a lot of noise. 1. let... | 0 | 2023-05-17T01:50:48 | https://dev.to/dragon72463399/rustzhong-de-fei-hua--l6b | rust, python | ### Compared with Python's conciseness, Rust is full of redundant "noise"
---
- 1. `let` is required when defining a variable, while Python has no such keyword. It hardly seems necessary: everyone understands an assignment, so why add an extra word? It feels redundant.
```
let x = 5;
```
- 2. `;` terminates every line of code. JS has the same style, as do many other languages, but it hardly feels necessary either; the meaning is clear without it, so it just looks redundant.
```
let x: i32 = 42;
```
- 3. `: <type>` declares the variable's type. This one is actually nice: it's optional, written between the variable name and the assignment. If you omit it, the compiler infers the type automatically, so there's usually no need to write it; just let the compiler handle it and spare the developer some mindless declarations.
> The most common convention is the "implicit declaration", i.e. not writing the type at all; spelling the type out in code is called an "explicit declaration".
- 4. `-> ()` specifies the return value. Rust's main function technically returns the empty tuple, i.e. the unit type `()`, but you're not forced to write it; it's usually omitted and the compiler fills it in. The explicit form exists, but the implicit form is far more common:
```
fn main() -> () {
    // program logic (explicit)
}
```
```
fn main() {
    // program logic (implicit)
}
```
- 5. `-> i32` specifies the function's return type, as follows:
```
fn add_numbers(a: i32, b: i32) -> i32 {
let sum = a + b;
return sum;
}
```
> Apart from main and functions that return nothing, every regular function must declare its return type. For anyone used to writing Python, this really does feel superfluous; the meaning is obvious, so it seems unnecessary.
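As a quick sketch of the points above (reusing the article's `add_numbers` example), here is where Rust demands annotations and where inference takes over:

```rust
// Function signatures must spell out parameter and return types...
fn add_numbers(a: i32, b: i32) -> i32 {
    a + b // ...while the body can end with a trailing expression instead of `return`
}

fn main() {
    let x = 5;       // type inferred as i32 from how it is used below
    let y: i64 = 42; // explicit annotation, needed only when you want a specific type
    println!("{}", add_numbers(x, 3)); // prints 8
    println!("{}", y);
}
```

So in practice the "noise" is concentrated in function signatures; inside a function body, inference does most of the work.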
| dragon72463399 |
1,470,663 | 04.01 - Redux and Redux Saga | App preview: Project files: src/index.html <!DOCTYPE html> <html... | 20,220 | 2023-05-17T04:52:18 | https://dev.to/adriangheo/0401-redux-and-redux-saga-14c1 | vscode | App preview:

Project files:

<hr/>
src/index.html
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>SAGA-App</title>
</head>
<body>
<div class="container">
<div id="root"></div>
</div>
</body>
</html>
```
<hr/>
src/index.js
```jsx
import React from 'react';
import createSagaMiddleware from 'redux-saga';
import { render } from 'react-dom';
import { createStore, applyMiddleware } from 'redux';
import { Provider } from 'react-redux';
import { logger } from 'redux-logger';
import reducer from './reduxReducers';
import App from './App';
import rootSaga from './sagas';
// Create saga middleware
const sagaMiddleware = createSagaMiddleware();
// Create a redux store with reducer and middleware (saga and logger)
const store = createStore(
reducer,
applyMiddleware(sagaMiddleware, logger),
);
// Run the root saga
sagaMiddleware.run(rootSaga);
// Render the application
render(
<Provider store={store}>
<App />
</Provider>,
document.getElementById('root'),
);
// Enable Hot Module Replacement (HMR)
if (module.hot) {
module.hot.accept(App);
}
```
<hr/>
src/sagas.js
```jsx
import { put, call, takeLatest, all } from 'redux-saga/effects';
// Define a generator function that fetches an image and dispatches an 'IMAGE_RECEIVED' action
function* fetchImage() {
  try {
    // call() keeps the async work declarative and easy to test
    const blob = yield call(() => fetch('https://picsum.photos/200').then(res => res.blob()));
    yield put({ type: 'IMAGE_RECEIVED', url: URL.createObjectURL(blob) });
  } catch (err) {
    yield put({ type: 'IMAGE_RECEIVED', url: 'Error fetching data' });
  }
}
// Define a watcher saga that watches for 'GET_IMAGE' actions and calls fetchImage when one is dispatched
function* actionWatcher() {
yield takeLatest('GET_IMAGE', fetchImage)
}
// Define a root saga that runs all the sagas together
export default function* rootSaga() {
yield all([
actionWatcher(),
]);
}
```
<hr/>
src/reduxReducers.js
```jsx
// Define a reducer function that initializes state and handles two action types: 'GET_IMAGE' and 'IMAGE_RECEIVED'
const reducer = (state = {}, action) => {
switch (action.type) {
case 'GET_IMAGE':
// When 'GET_IMAGE' is received, set loading to true
return { ...state, loading: true };
case 'IMAGE_RECEIVED':
// When 'IMAGE_RECEIVED' is received, update the image in the state and set loading to false
return { ...state, image: action.url, loading: false }
default:
// For all other action types, return the current state
return state;
}
};
// Export the reducer function
export default reducer;
```
<hr/>
src/reduxActions.js
```jsx
// Define an action creator function getImage that returns an action of type 'GET_IMAGE'
export const getImage = () => ({
type: 'GET_IMAGE',
});
```
<hr/>
src/App.jsx
```jsx
import React from 'react';
import Button from './containers/Button';
import ImageItem from './containers/ImageItem'
import Loading from './containers/Loading'
// Create a functional component App that renders a Button, a Loading and an ImageItem component within a div
let App = () => (
<div>
<Button />
<Loading />
<ImageItem />
</div>
);
// Export the App component
export default App;
```
<hr/>
src/containers/ImageItem.jsx
```jsx
import React from 'react';
import { connect } from 'react-redux'
// Define a functional component ImageItem that renders an img element if the image prop is truthy
let ImageItem = ({ image }) => (
image ?
<article>
<img src={image} alt="Randomly generated" />
</article> :
null
);
// Define a mapStateToProps function that maps state to props
const mapStateToProps = (state) => ({
image: state.image,
})
// Connect the ImageItem component to the Redux store
ImageItem = connect(
mapStateToProps,
null
)(ImageItem)
// Export the connected ImageItem component
export default ImageItem;
```
<hr/>
src/containers/Loading.jsx
```jsx
import React from 'react';
import { connect } from 'react-redux'
// Define a functional component Loading that renders a loading message if the loading prop is truthy
let Loading = ({ loading }) => (
loading ?
<div>
<h1>LOADING...</h1>
</div> :
null
);
// Define a mapStateToProps function that maps state to props
const mapStateToProps = (state) => ({
loading: state.loading
})
// Connect the Loading component to the Redux store
Loading = connect(
mapStateToProps,
null
)(Loading)
// Export the connected Loading component
export default Loading;
```
<hr/>
src/containers/Button.jsx
```jsx
import React from 'react';
import { connect } from 'react-redux';
import { getImage } from '../reduxActions'
// Define a functional component Button that renders a button and dispatches the getImage action when clicked
const Button = ({getImage}) => {
return (
<button
onClick={getImage}
>Press to see Image</button>
);
};
// Define a mapDispatchToProps function that maps dispatch to props
const mapDispatchToProps = {
getImage: getImage,
};
// Connect the Button component to the Redux store
export default connect(
null,
mapDispatchToProps,
)(Button);
```
1,470,714 | Zero Trust Network for Microservices with Istio | Security was mostly perimeter-based while building monolithic applications. This means securing the... | 0 | 2023-05-17T07:43:57 | https://imesh.ai/blog/zero-trust-network-for-microservices-with-istio/ | istio, kubernetes, microservices, zerotrust | ---
title: Zero Trust Network for Microservices with Istio
published: true
date: 2023-03-24 07:41:06 UTC
tags: istio,Kubernetes,microservices,zerotrust
canonical_url: https://imesh.ai/blog/zero-trust-network-for-microservices-with-istio/
---
Security was mostly perimeter-based while building monolithic applications. This means securing the network perimeter and access control using firewalls. With the advent of microservices architecture, static and network-based perimeters are no longer effective.
Nowadays, applications are deployed and managed by container orchestration systems like Kubernetes, which are spread across the cloud. Zero trust network (ZTN) is a different approach to secure data across cloud-based networks. In this article, we will explore how ZTN can help secure microservices.
## What is Zero Trust Network (ZTN)?
Zero trust network is a security paradigm that does not grant implicit trust to users, devices, and services, and continuously verifies their identity and authorization to access resources.
In a microservices architecture, if a service (the server) receives a request from another service (the client), the server should not assume the client is trustworthy. It should continuously authenticate and authorize the client first, and only then allow the communication to happen securely (refer to fig. A below).

_Fig. A – A Zero Trust Network (ZTN) environment where continuous authentication and authorization are enforced between microservices across multicloud_
## Why is a zero trust network environment inevitable for microservices?
The importance of securing the network and data in a distributed network of services cannot be stressed enough. Below are a few challenges why a ZTN environment is necessary for microservices:
1. **Lack of ownership of the network:** With microservices, applications moved from a single perimeter to multiple clouds and data centers. As a result, the network is also distributed, giving intruders a larger attack surface.
2. **Increased network and security breaches:** Data and security breaches have become increasingly common since applications moved to public clouds. In 2022, [nearly half of all data breaches occurred in the cloud](https://www.ibm.com/reports/data-breach).
3. **Managing multicluster network policies has become tedious:** Organizations deploy hundreds of services across multiple Kubernetes clusters and environments. Network policies are local to a cluster and do not usually span multiple clusters, so defining and implementing security and routing policies for multicluster and multicloud traffic takes a lot of customization and development. Configuring and managing consistent network policies and firewall rules for each service becomes a never-ending, frustrating process.
4. **Service-to-service connections are not inherently secure in K8s:** By default, any service can talk to any other service inside a cluster. So if one service pod is compromised, an attacker can easily pivot to the other services in that cluster (lateral movement). Kubernetes does not provide out-of-the-box encryption or authentication for communication between pods or services. Although K8s offers additional security features such as mTLS, enabling it is a complex, manual process for each service.
5. **Lack of visibility into network traffic:** If there is a security breach, the Ops and SRE teams should be able to react to the incident fast. Poor real-time visibility into network traffic across environments becomes a bottleneck for SREs diagnosing issues in time. This impedes incident response, leading to a high mean time to recovery (MTTR) and catastrophic security risks.
In theory, a zero trust network (ZTN) philosophy solves all the above challenges. Istio service mesh helps Ops and SREs to implement ZTN and secure microservices across the cloud.
Please read [top 10 pillars of zero trust network considered by top CISOs](https://imesh.ai/blog/top-10-pillars-of-zero-trust-network/).
## How Istio service mesh enables ZTN for microservices
Istio is a popular open-source service mesh implementation software that provides a way to manage and secure communication between microservices. Istio abstracts the network into a dedicated layer of infrastructure and provides visibility and control over all communication between microservices.
Istio works by injecting an Envoy proxy (a small sidecar daemon) alongside each service in the mesh (refer to fig. B). Envoy is an L4 and L7 proxy that secures connections and routes traffic among the microservices. The Istio control plane lets users manage all these Envoy proxies, for example by defining and cascading security and network policies. (More on Istio architecture and its components in an upcoming blog.)

_Fig B – Istio using Envoy proxy to secure connections between services across clusters and clouds_
Istio simplifies enforcing a ZTN environment for microservices across the cloud. Inspired by [Gartner Zero Trust Network Access](https://www.gartner.com/smarterwithgartner/new-to-zero-trust-security-start-here), we have outlined four pillars of zero trust network that can be implemented by Istio.

_Four pillars of zero trust network enforced by Istio service mesh_
### 1. Enforcing Authentication with Istio
Security teams would be required to create authentication logic for each service to verify the identity of users (humans or machines) that sent requests. This is necessary to ensure the trustworthiness of the user.
In Istio, it can be done by configuring peer-to-peer and request authentication policies using `PeerAuthentication` and `RequestAuthentication` custom resources (CRDs):
1. Peer authentication policies involve authenticating service-to-service communication using mTLS. That is, certificates are issued for both the client and server to verify the identity of each other.
Below is a sample `PeerAuthentication` resource that enforces strict mTLS authentication for all workloads in the `foo` namespace:
```
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: foo
spec:
  mtls:
    mode: STRICT
```
2. Request authentication policies involve the server checking whether the client is even allowed to make the request. Here, the client attaches a JWT (JSON Web Token) to the request for server-side authentication.
Below is a sample `RequestAuthentication` policy created in the `foo` namespace. It specifies that incoming requests to the `my-app` service must carry a JWT issued by the issuer listed under `jwtRules` and verified against the public keys fetched from the `jwksUri`.
```
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-example
  namespace: foo
spec:
  selector:
    matchLabels:
      app: my-app
  jwtRules:
  - issuer: "https://issuer.example.com"
    jwksUri: "https://issuer.example.com/keys"
```
Both authentication policies are stored in Istio configuration storage.
### 2. Implementing authorization with Istio
Authorization is verifying whether the authenticated user is allowed to access a server (access control) and perform the specific action. Continuous authorization prevents malicious users from accessing services, which ensures their safety and integrity.
`AuthorizationPolicy` is another Istio CRD that provides access control for services deployed in the mesh. It helps in creating policies to deny, allow, and also perform custom actions against an inbound request. Istio allows setting multiple policies with different actions for granular access control to the workloads.
The following `AuthorizationPolicy` denies POST requests from workloads in the `dev` namespace to workloads in the `foo` namespace.
```
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  action: DENY
  rules:
  - from:
    - source:
        namespaces: ["dev"]
    to:
    - operation:
        methods: ["POST"]
```
### 3. Multicluster and multicloud visibility with Istio
Another important pillar of ZTN is network and service visibility. SREs and Ops teams would require real-time monitoring of traffic flowing between microservices across cloud and cluster boundaries. Having deep visibility into the network would help SREs quickly identify the root cause of anomalies, develop resolution, and restore the applications.
Istio provides visibility into traffic flow and application health by collecting the following telemetry data from the mesh's data and control planes.
1. **Logs:** Istio collects all kinds of logs such as services logs, API logs, access logs, gateway logs, etc., which will help to understand the behavior of an application. Logs also help in faster troubleshooting and diagnosis of network incidents.
2. **Metrics:** They help to understand the real-time performance of services for identifying anomalies and fine-tuning them in the runtime. Istio provides many metrics apart from the 4 golden ones, which are error rates, traffic, latency, and saturation.
3. **Distributed tracing:** It is the tracing and visualizing of requests flowing through multiple services in a mesh. Distributed tracing helps understand interactions between microservices and provides a holistic view of service-to-service communication in the mesh.
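As a minimal sketch of how such telemetry can be switched on, the snippet below assumes the `Telemetry` API available in recent Istio releases; the resource name is illustrative, and applying it in the root namespace makes it the mesh-wide default:

```yaml
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default        # illustrative name
  namespace: istio-system   # the root namespace applies the policy mesh-wide
spec:
  accessLogging:
  - providers:
    - name: envoy           # emit Envoy's standard access logs for every workload
```

Narrower `Telemetry` resources can then be created per namespace or per workload to override this default.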
### 4. Network auditing with Istio
Auditing is the analysis of a process's logs over time, with the goal of optimizing the overall process. Audit logs give auditors valuable insight into network activity, including details of each access, the methods used, traffic patterns, and more. This information helps in understanding the communication flowing in and out of data centers and public clouds.
Istio records who accessed (or requested) which resources and when, which auditors need in order to investigate faulty situations and then suggest steps to improve the overall performance of the network and the security of cloud-native applications.
## Deploy Istio for a better security posture
The challenges around securing networks and data in a microservices architecture are going to be increasingly complex. Attackers are always ahead in finding vulnerabilities and exploiting them before anyone in the SRE team gets time to notice.
Implementing a zero trust network will provide visibility and secure Kubernetes clusters from internal and external threats. Istio service mesh can lead this endeavor from the front, with its ability to implement zero trust out of the box. IMESH helps enterprises onboard and adopt Istio service mesh without operational hassle. Check out our [offerings](https://www.imesh.ai/pricing.html).
**About IMESH**
[IMESH](https://imesh.ai/) offers solutions that help you avoid errors while experimenting with Istio and fend off operational issues. IMESH provides a platform built on top of Istio and the Envoy API gateway to help you start with Istio from day one. The IMESH Istio platform is hardened for production and fit for multicloud and hybrid cloud applications. IMESH also provides consulting services and expertise to help you adopt Istio rapidly in your organization.
IMESH also provides a strong visibility layer on top of Istio that gives Ops and SREs a multicluster view of services, dependencies, and network traffic. The visibility layer also surfaces logs, metrics, and traces to help Ops folks troubleshoot network issues faster.
The post [Zero Trust Network for Microservices with Istio](https://imesh.ai/blog/zero-trust-network-for-microservices-with-istio/) appeared first on [IMESH](https://imesh.ai/blog). | 0anas0 |
1,470,728 | Create Portfolio Website Using HTML and CSS (Source Code) | Well, today I’ll be making a visually delicious Portfolio Website Using Html and CSS Source Code. In... | 0 | 2023-05-17T06:09:14 | https://dev.to/cwrcode/create-portfolio-website-using-html-and-css-source-code-5d5c | | Well, today I’ll be making a visually delicious **[Portfolio Website](https://www.codewithrandom.com/2023/03/14/simple-portfolio-website-using-html-css-portfolio-website-source-code/)** using HTML and CSS. In this article, you get the complete code and an explanation of the portfolio website built with HTML and CSS.
**What is a portfolio website?**
Everyone needs websites and web applications nowadays, so there are plenty of opportunities if you work as a web developer. But to land a web developer job, you'll need a good portfolio website: one that gives potential employers relevant information about your skills, experience, and the projects you've worked on. Consider it your online résumé.
**Portfolio Website With Source code:-**
So, our ingredients for this Portfolio Website will be HTML and CSS. I’m assuming you have a basic knowledge of both. When you actually want to deploy this portfolio website, you’ll need to work on the backend so that viewers can click and navigate to the desired pages.
But today is all about looks, i.e., the front end! Even JavaScript stays out of scope, because I want you to see how HTML and CSS alone can shine as great website makers (and also because I want to keep this beginner-friendly).
First and foremost, we need to make the backbone of our website, an HTML file. I made the HTML file and named it ‘index.html’. Since we'll be using CSS as well, in the same folder I made a CSS file, named ‘styles.css’. In index.html, we need to give the following lines of code.
## Portfolio HTML Code:-
```
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="./styles.css">
    <link rel="preconnect" href="https://fonts.googleapis.com">
    <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
    <link href="https://fonts.googleapis.com/css2?family=Lato:wght@100;400;700&family=Poppins:wght@200&display=swap"
        rel="stylesheet">
<title>Portfolio</title>
</head>
<body>
</body>
</html>
```
If you’re using VS Code and have an HTML extension installed, you just need to type ‘!’ and press enter. If you’re not, feel free to copy-paste these lines.
Discussing them is not necessary, just know that they let the browser know you're trying to run a webpage. Also, they are importing all the CSS you'll be writing in 'styles.css'.
Now take a look at what we’re building.
I know, I know, it's a masterpiece, isn't it? And no, I am not the handsome guy in the picture... I look a lot better. Anyway now look only at the top.
This is the navigation bar. It's like a must-have on all your websites. So, we'll make this first. Make sure to write the code between the opening and closing body tags (`<body>` and `</body>`).
```
<body>
<header>
<div class="container">
<nav class="flex items-centre justify-between">
                <div class="left flex justify-right">
<div class="logo">
<img src="./images/logo.png" width="50px" alt="logo">
</div>
<div>
<a href="#">Home</a>
<a href="#">About</a>
<a href="#">Services</a>
                        <a href="#">Blog</a>
<a href="#">More</a>
</div>
</div>
<div class="right">
<button class="btn btn-primary">Contact</button>
</div>
</nav>
</div>
</header>
```
The logo I used was downloaded from a website. Relax, it’s royalty free. In fact, all the icons and images in this blog are downloaded. While there’s no denying that these icons and pictures play a huge part in your website’s appearance, my goal was to tell you how to incorporate these pictures in your site and not to spend hours in Photoshop designing my own beautiful and ‘original’ icons.
But whenever you make a website, it’s your duty to include visually appealing images, that too original ones. If you’re following this tutorial, try to take out some time and get your own logo. You can make one here or simply download it from here. You're welcome.
So this is what our nav bar looks for now:
Don't worry. You'll get exactly what was advertised, just follow along. Let's put in that model's image and the text we want to display.
```
<div class="hero flex items-centre justify-between">
<div class="left flex-1 justify-center">
<img src="./images/main-img.png" alt="Profile">
</div>
<div class="right flex-1">
<h6>Akshat Sharma</h6>
<h1>I'm a Web<br> <span>Developer</span></h1>
<p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Doloremque illum nam nobis, minima
laudantium fugit sequi nostrum quod impedit, beatae necessitatibus praesentium optio labore
nemo!</p>
<div>
<button class="btn btn-secondary">DOWNLOAD CV</button>
</div>
</div>
</div>
```
Unless your name is Akshat Sharma, which makes us name twins, by the way, make sure to change the name you're displaying in the h6 tag.
You should also replace the Lorem Ipsum text with something more suitable. In fact, any text that is being displayed is subject to change as per your wish.
By now, you're probably itching to use CSS. I originally planned to save the styling for last, but it's important for a developer to know how the website looks at any given time to make further changes. Let's dive into CSS right away!
A piece of advice: your CSS file will go on forever if you keep everything in one. That’s why I made two files: ‘styles.css’, for styling all our classes and elements, and ‘utilities.css’, for styles we might need on more than one element. You'll see as we continue.
```
@import 'utilities.css';
:root{
--primary: rgb(29, 221, 189);
--bgDark: rgb(12, 12, 12);
--white: rgb(250, 250, 250);
--secondary: rgb(0, 59, 50);
--bgLight: rgb(190, 181, 181);
}
```
This is the beginning of 'styles.css'. Notice we need to import ‘utilities.css’ to pull in all the styling we'll write there. I decided on a color scheme beforehand so that I don't keep breaking my flow thinking of colors.
I declared variables for the colors I'll need under ':root'. Raw RGB values corresponding to their respective colors are impossible to remember, which is exactly what these variables are for.
The HTML CSS Support extension for VS Code helped me select my colors easily. See? Cool IDEs always help. In case you're not able to use that, you can get your color's RGB, HSL, or hexadecimal value from here.
## CSS Code For Portfolio:-
```
*{
padding: 0;
margin: 0;
box-sizing: border-box;
-webkit-font-smoothing: antialiased;
}
header{
background-color: var(--bgDark);
clip-path: polygon(0 0, 100% 0, 100% 100%, 73% 94%, 0 100%);
}
header nav .left a{
color: var(--white);
text-decoration: none;
margin-right: 2rem;
text-transform: uppercase;
transition: all .2s ease;
}
header nav .left a:hover{
color: var(--primary);
}
header nav {
padding: 2rem 0;
}
header nav .logo{
margin-right: 3rem;
}
```
This is the CSS for the header and navigation bar and the code goes in 'styles.css'. And this is the product:
Black is my favorite color! It's a real lifesaver when you have no clue how you want your background to be. Take a look at the CSS for the 'hero' class now.
```
body{
font-family: 'Poppins', sans-serif;
}
.container{
max-width: 1152px;
padding: 0 15px;
margin: 0 auto;
}
.hero{
padding-top: 2rem;
padding-bottom: 3rem;
}
.hero .left img{
width: 400px;
}
.hero .right {
color: var(--white);
margin-top: -7rem;
}
.hero .right h6{
font-size: 1.6rem;
color: var(--primary);
margin-bottom: 0.5rem;
}
.hero .right h1{
font-size: 4rem;
font-weight: 100;
line-height: 1.2;
margin-bottom: 2rem;
}
.hero .right h1 span{
color: var(--primary);
}
.hero .right p{
line-height: 1.9;
margin-bottom: 2rem;
}
```
Did you notice how I avoid using pixel values as much as possible? The thing is, pixels are absolute units: changing other values has no effect on px values, which makes them pretty antiquated. 'em' and 'rem', however, change with respect to the parent or root element, which makes them responsive. Read more about the differences here. The explanation's pretty good.
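As a tiny illustration of that responsiveness (the selector names below are made up for the demo, not part of our site):

```css
html { font-size: 16px; }           /* 1rem = 16px everywhere below */
.demo-title { font-size: 1.5rem; }  /* 24px now, and it rescales if the root changes */
.demo-text { font-size: 0.875rem; } /* 14px, responsive for the same reason */
```

Bump the root font-size to 20px and every rem-based size grows with it; px values would just sit still.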
Now to work on those buttons and the overall alignment we'll type our magic spell, the one you call CSS. This one goes in our 'utilities.css'. Open the file and type the following:
```
.flex{
display: flex;
}
.items-centre{
align-items: center;
}
.justify-between{
justify-content: space-between;
}
.justify-center{
justify-content: center;
}
.justify-right{
justify-content: right;
}
.btn{
padding: 0.6rem 2rem;
font-size: 1rem;
font-weight: 600;
border: 2px solid transparent;
outline: none;
cursor: pointer;
text-transform: uppercase;
transition: all .2s ease;
}
.btn-primary{
background-color: var(--primary);
color: var(--secondary);
margin-top: -15rem;
}
.btn-primary:hover{
background: transparent;
border-color: var(--primary);
color: var(--primary);
}
.flex-1{
flex: 1;
}
.btn-secondary{
background: transparent;
color: var(--primary);
border-color: var(--primary);
}
.btn-secondary:hover{
background: var(--primary);
color: var(--secondary);
}
```
**The end product? Here it is!**
```
<section class="about">
<div class="container flex items-centre">
<div class="left flex-1 justify-right">
<img src="./images/man2.png" height="400px" alt="profile pic">
</div>
<div class="right flex-1">
<h1>About <span>Me</span></h1>
<h3>Hello! I'm Akshat Sharma.</h3>
<p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Atque adipisci distinctio obcaecati aliquid,
quia tempora quis optio repudiandae officia earum?
Lorem ipsum dolor sit amet consectetur, adipisicing elit.
</p>
<div class="socials">
<a href="#"><img src="./images/website.png" width="40px"></a>
<a href="#"><img src="./images/facebook.png" width="40px"></a>
<a href="#"><img src="./images/instagram.png" width="40px"></a>
<a href="#"><img src="./images/media-player.png" width="40px"></a>
</div>
</div>
</div>
</section>
```
It is imperative for you to understand how 'px', 'em', and 'rem' work. And what's a better way of learning than trying it out yourself? Play with these values and understand the difference. Change them according to the look you desire, and don't worry about messing up the code. You can come back here again and again. Whether you want to copy-paste the code or understand how it works, we've got you covered.
Again, the icons and images used were downloaded from free-to-use websites. You can choose any images you like; just make sure to give the correct path in the `img src=""` attribute. We used the classes defined in 'utilities.css' to ease our layout work and will continue to do so. So now you see why we made two CSS files.
Though not extraordinary, this design resonates with the formal vibe you'd want to give to your website.
```
section{
padding: 6rem;
}
section.about h1{
margin-bottom: 1rem;
font-size: 1.6rem;
font-weight: 600;
}
section.about h1 span{
color: var(--primary);
}
section.about h3{
font-size: 1rem;
margin-bottom: 1rem;
font-weight: 600;
}
section.about p{
font-family: 'Lato', sans-serif;
color: var(--secondary);
line-height: 1.9rem;
margin-bottom: 2rem;
}
section.about .socials{
display: flex;
}
section.about .socials a{
display: flex;
align-items: center;
justify-content: center;
width: 35px;
margin-right: 0.8rem;
border-radius: 50%;
}
section.about .socials a:hover{
background: var(--primary);
}
```
This CSS code goes in the 'styles.css'. It handles the margins and paddings individually, but we don't have to do much as our previously defined classes already handled most of the layout process.
To make our website look a little responsive, I use the ': hover' selector a lot. I've used it on all our buttons and social media icons. Basically, it defines what happens when the cursor points to or 'hovers' over the specified element. A little color transition is enough to breathe life into our webpage.
Well, this does smell nice! We'll now be working on the last part, though certainly not the least. Although your CV is a true record of your skills and achievements, we need your services to be visible to the viewer at a glance.
Define another section in the 'index.html' and type in the code as follows:
```
<section class="services">
<div class="container">
<h1 class="services-head">Services</h1>
<p>All your digital needs... covered.</p>
<div class="card-grid">
<div class="card">
<img src="./images/graphic-design.png">
<h2>Graphic Design</h2>
<p>Lorem ipsum dolor sit amet consectetur, adipisicing elit. Nulla, debitis?</p>
</div>
<div class="card">
<img src="./images/world-wide-web.png">
<h2>Web Development</h2>
<p>Lorem ipsum dolor sit amet consectetur, adipisicing elit. Nulla, debitis?</p>
</div>
<div class="card">
<img src="./images/blogger.png">
<h2>Content Writing</h2>
<p>Lorem ipsum dolor sit amet consectetur, adipisicing elit. Nulla, debitis?</p>
</div>
</div>
</div>
</section>
```
Bold is beautiful, sure. But this is too loud, and, in fact, a disgrace to our development skills. Let's do the aligning and styling and make it look a little subtle.
Now even though flexbox is an awesome tool to help us with layouts, we have one more trick up our sleeve. This, my friends, is called the CSS Grid Layout, or simply, the Grid. Simply put, the grid is our trump card in handling layouts and designs. In fact, it has the responsive touch we need for our sites. I know how much you love reading long documents, so here you go! There are a number of YouTube tutorials to learn from as well, so go explore!
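As a taste of that responsive touch, Grid can fit as many columns as the screen allows without a single media query. A small sketch (the class name is illustrative, not part of this project's stylesheet):

```css
.responsive-grid {
    display: grid;
    /* as many 250px-minimum columns as fit; they stretch to share leftover space */
    grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
    gap: 2rem;
}
```

Resize the browser window and the cards reflow from three columns down to one on their own.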
Our 'styles.css' just needs the following block of code now.
```
section.services{
    background: rgb(17, 17, 17);
}
.services-head{
    text-align: center;
    margin-bottom: 1rem;
    line-height: 0.5rem;
    color: var(--primary);
}
.services-head + p{
    color: var(--white);
    font-family: 'Lato', sans-serif;
    text-align: center;
    margin-bottom: 6rem;
    font-weight: 400;
}
.card img{
width: 50px;
background:white;
}
section.services .card-grid{
display: grid;
grid-template-columns: repeat(3,1fr);
column-gap: 2rem;
}
section.services .card-grid .card{
background: var(--white);
padding: 3rem 2rem;
position: relative;
text-align: center;
transition: all .2s ease;
}
section.services .card-grid .card img{
position: absolute;
top: -1.5rem;
left: 50%;
transform: translateX(-50%);
}
section.services .card-grid .card h2{
font-weight: 600;
font-size: 1.2rem;
margin-bottom: 0.5rem;
}
section.services .card-grid .card p{
font-family: 'Lato', sans-serif;
    color: var(--secondary);
line-height: 1.6;
}
section.services .card-grid .card:hover{
background: var(--primary);
}
section.services .card-grid .card:hover h2{
color: var(--white);
}
section.services .card-grid .card:hover p{
color: var(--white);
}
```
And my dear friends, we did it! No really. That's all. Our beautiful **[Portfolio Website](https://www.codewithrandom.com/2023/03/14/simple-portfolio-website-using-html-css-portfolio-website-source-code/)** is ready to be shown off to the world.
These cards have the hover property as well. So when you hover the cursor over them, both the background color and the text color change.
Our cuisine is finally prepared! This was fun, was it not? The entire code and its end product are all yours now. Change the colors, and the properties, and feel free to mess it up as much as you want.
I mean that's the best way to learn anything, trust me. Feel free to ask me as many doubts as you want in the comments below. Alternatively, you could reach out to me on Instagram. I like my DMs filled.
So, with a heavy heart, I'll be taking your leave. But hey, I'll be back with more blogs, so you don't go anywhere! Thanks for visiting CodeWithRandom, goodbye!
-Akshat Sharma | cwrcode | |
1,474,655 | "this" in JavaScript and "self" in Python; Any Difference? | We use this in JavaScript and self in Python. Are they same? This question was haunting me for some... | 0 | 2023-05-21T00:01:29 | https://dev.to/ibtesum/this-in-javascript-and-self-in-python-any-difference-38fm | javascript, python, programming | We use `this` in JavaScript and `self` in Python. Are they the same? This question was haunting me for some time. Finally, after some digging, I came up with an answer. Hope you will love it.
Personally I love examples. So without further ado, let's jump into some examples.
Here is some JS code:
```js
function Love(partnerName, myName){
  this.partnerName = partnerName;
  this.myName = myName;
  this.get_married = function(){
    return `${this.partnerName} and ${this.myName} got married.`;
  };
}
const couple1 = new Love("Ross" , "Rachel")
const couple2 = new Love("Chandler", "Monica")
console.log(couple1.get_married())
console.log(couple2.get_married())
```
Here is the same code *translated* to Python:
```python
class Love:
def __init__(self, partnerName, myName):
self.partnerName = partnerName
self.myName = myName
def get_married(self):
return f"{self.partnerName} and {self.myName} got married."
couple1 = Love("Ross", "Rachel")
couple2 = Love("Chandler", "Monica")
print(couple1.get_married())
print(couple2.get_married())
```
Now, what do you think their output would be? Would there be any difference? Think about it for at least one minute.
So the output for both the JS code and Python code would be:
```bash
Ross and Rachel got married.
Chandler and Monica got married.
```
Now we see that both `this` and `self` act similarly. Yes! There is no difference.
But...
I **lied, partially**. Though their output is the same, `this` and `self` are not quite the same.
## Value of `this`:
The value of `this` depends on the context in which it appears. This context can be a `function`, a `class`, or the global scope (`Window` in the browser, `global` in Node.js).
Put more simply: the value of `this` depends on how the function is called.
**Note:** JavaScript functions can be called in four ways!
Such as:
- Calling a function as a function.
- Calling a function as a method.
- Calling a function as constructor.
- Calling a function with `call` or `apply`.
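Here is a small sketch of how the same function can end up with four different `this` values (run it as a plain, non-strict Node script):

```javascript
function whoAmI() {
  return this;
}

const obj = { whoAmI };

// 1. As a plain function: `this` is the global object
//    (or `undefined` in strict mode / ES modules).
console.log(whoAmI() === globalThis);

// 2. As a method: `this` is the object before the dot.
console.log(obj.whoAmI() === obj); // true

// 3. As a constructor: `this` is the freshly created instance.
console.log(new whoAmI() instanceof whoAmI); // true

// 4. With call/apply: `this` is whatever you pass in.
console.log(whoAmI.call(obj) === obj); // true
```

One function, four different `this` values, purely based on the call site.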
## Value of `self`:
On the other hand, in Python, the value of `self` is simply the instance the method is called on. In this case, `couple1` is the value of `self` when we call `couple1.get_married()`, and `couple2` is the value of `self` for the second call.
We can prove it by the following code:
```python
class Love:
def __init__(self, partnerName, myName):
self.partnerName = partnerName
self.myName = myName
def get_married(self):
print("id of self: ",id(self)) # Notice this line. It prints the id of "self".
return f"{self.partnerName} and {self.myName} got married."
couple1 = Love("Ross", "Rachel")
couple2 = Love("Chandler", "Monica")
print("id of couple1: ", id(couple1)) # Notice this line. It prints the id of "couple1".
print(couple1.get_married())
print("id of couple2: ", id(couple2)) # Notice this line. It prints the id of "couple2".
print(couple2.get_married())
```
The output would be:
```bash
id of couple1: 140337639399376
id of self: 140337639399376 # This number is the same as the above number.
Ross and Rachel got married.
id of couple2:  140337640683984
id of self:  140337640683984 # This number is the same as the above number.
Chandler and Monica got married.
```
If you look closely at the ids, you will notice they are the same for `self` and the object instances. This proves that `self` actually denotes the instances (`couple1` and `couple2`).
## Differences:
1. First of all, `this` is a **reserved** keyword. `self` is not. You can use anything in lieu of `self`. But it has to be the first parameter.
2. `this` is dynamically bound (runtime bound), meaning it changes its behavior based on how the function is called, and there are many ways to call a function in JavaScript. On the other hand, `self` in Python is unambiguous: its value is always the instance of the class on which the method is called.
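To see point 1 in action, here's a sketch where the first parameter is deliberately named something other than `self`:

```python
class Greeter:
    def __init__(this, name):      # "this" is perfectly legal here; it's just the first parameter
        this.name = name

    def greet(whatever):           # any name works, though "self" is the strong convention
        return f"Hello, {whatever.name}!"


print(Greeter("Ada").greet())      # Hello, Ada!
```

Python doesn't care what the first parameter is called; it only cares that the instance is passed into that slot.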
These are the fundamental differences between them. If you have any further questions, feel free to comment.
Happy Coding!
| ibtesum |
1,479,400 | How to publish a React Component without even including React? | React is a JavaScript library which is used to build attractive User-Interfaces (UIs). Now, most of... | 0 | 2023-05-25T16:44:05 | https://dev.to/aryan_shourie/how-to-publish-a-react-component-without-even-including-react-3kg1 | react, npm, javascript, webdev | React is a **JavaScript library** which is used to build attractive **User-Interfaces** (UIs). Now, most of us know what React is and we also know that to publish a React Component, it is necessary to include the React library.
But, what if I tell you, that it is possible to publish a React Component without even including React! Sounds weird, right?
In this article, I will demonstrate how to publish a React Component to the official [npm.js website](https://www.npmjs.com/)
## Components in React
React has 2 types of components:
- Function based Component
- Class based Component
Both of these components can be rendered easily with **ReactDOM** from the React library. In React 18, you first create a root with **ReactDOM.createRoot()**, passing it the target HTML element, and then call the root's **render()** function with the JSX you want to display.
The sample code is given below:
```
import React from 'react';
import ReactDOM from 'react-dom/client';
import './index.css';
import App from './App';
const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(
<React.StrictMode>
<App />
</React.StrictMode>
);
```
In the above code, we are rendering the **App** component using the **root.render()** function.
## Publishing React Component without React
If we want to publish a React Component without including React itself, we can publish the Component as a **standalone module** that specifies React as a **Peer Dependency**.

## Basic Setup:
Follow these steps to create the application:
**Step 1**: Create a React App using the following command-
```
npx create-react-app peering-example
```
**Step 2**: Create a new directory for your component and navigate into it-
```
mkdir my-react-component
cd my-react-component
```
**Step 3**: Initialize a new Node.js module in the directory-
```
npm init
```
**Step 4**: Install React as a Peer Dependency-
```
npm install --save-peer react
```
**Step 5**: Create 2 files-
```
touch testPackage.js
touch index.js
```
## Project Structure:
Once this process is complete, your folder structure should look something like this-

Your **package.json** file inside the **my-react-component** folder should look something like this-
```
{
"name": "test-package-for-publish",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "ISC",
"peerDependencies": {
"react": "^18.2.0"
}
}
```
**Step 6**: Now, we will create a default component in the **testPackage.js** file-
```
import React from 'react';

// Component names must start with a capital letter,
// otherwise JSX treats them as plain HTML tags.
function TestPackage() {
  return (
    <h1>This is a sample package</h1>
  );
};

export default TestPackage;
```
**Step 7**: We will now export the component as **default** from the **index.js** file-
```
export { default } from './testPackage';
```
**Step 8**: Now, we will publish our component to the **npm.js** website-
```
npm publish
```
Running the above command will give you the following error-

This means that if we want to publish a package, we need to be logged into the npm.js website.
Create your account or login into the website - [https://www.npmjs.com/](https://www.npmjs.com/)

Once your account is created, log into it from the Visual Studio Code terminal -
```
npm login
```
After running this command, you will be asked your username, email and password -

When you enter the correct details, you will be logged into your account -

Now, we can easily publish our component using the command -
```
npm publish
```
After your package has been published, you will get a confirmation -

Hooray!! You have successfully published a custom package to the official [npm.js](https://www.npmjs.com/) website.
You can even check out your published package in your profile section of the website.
Now, to check whether your published package works correctly or not, navigate to the React App -
```
cd peering-example
```
Now, install your published package in the React App -
```
npm install dev-test-package-for-publish
```
When your package will be installed successfully, it will be added to your **package.json** file -

Use this package in your **App.js** file -
```
import React from "react";
import testPackage from "dev-test-package-for-publish";
import './App.css';
function App() {
return (
<>
<testPackage/>
</>
);
}
export default App;
```
The output on your screen will be -

And that's it! You have successfully published your custom React component without bundling React itself!
Connect with me on Linkedin :- [Linkedin](https://www.linkedin.com/in/aryandev-shourie-175025229/)
Do check out my Github for amazing projects:- [Github](https://github.com/Aryan2727-debug)
View my Personal Portfolio :- [Aryan's Portfolio](https://personal-portfolio-aryan.netlify.app/) | aryan_shourie |
1,482,228 | Host your Automation Allure Report on GitHub Pages with GitHub Actions. | We all have heard of or used Allure Report in our day-to-day life. Additionally, GitHub Actions and... | 0 | 2023-05-27T05:07:52 | https://dev.to/sadia/host-your-automation-allure-report-on-github-pages-with-github-actions-56a | githubactions, githubpage, automation, testing | We all have heard of or used Allure Report in our day-to-day life. Additionally, GitHub Actions and GitHub Pages are also commonly used by many people in their daily lives. We will integrate all of them and learn something new. First of all, let's recap what these three are.
## Allure Report
Allure Report is an open-source framework designed to create interactive and visually appealing test reports. These reports can be generated automatically from test automation frameworks such as Selenium, Appium, and TestNG.
## GitHub Pages
GitHub Pages is a static site hosting service offered by GitHub that allows developers to publish web content directly from a GitHub repository. It takes HTML, CSS, and JavaScript files.
## GitHub Actions
GitHub Actions is a platform that allows developers to automate software workflows related to building, testing, and deploying their code. It can be used to automate the process of generating and publishing Allure reports on GitHub Pages.
## **In this document, you will learn how to host your automation Allure report on GitHub Pages using GitHub Actions.**
When using GitHub Pages, you receive a subdomain under `github.io` for hosting your website. In order to host your site, the first requirement is to have a GitHub account and a repository containing the automation code that executes the test cases. Your site will be available at `<username>.github.io/<repository name>`.
It's important to note that the repository needs to be public in order to host a site with GitHub Pages.
Now, let's talk about how it can help us. As automation testers, we know how frustrating it is to share our daily progress reports with management. Usually, we use tools like Extent Report or Allure Report and send these reports by email to show how our automation work is going, whether that's every day or every week.
But there's a problem with using Allure Report. We have to send a whole folder called "allure-results" and run a command to see it, which is a lot of extra work. But what if we didn't have to go through all that trouble? We can use GitHub Pages instead. We can put our automation reports there and just give management a link. So whenever they want to see our latest progress, they can easily click on the link and check it out. Isn't that amazing?
Now, let's get started and explore it further.
In this demonstration, I will present a project that utilizes the following technical stack along with the use of GitHub Actions.

Now, let's take a look at the entire process step by step.
# **Step 1:**
1. Open up an IDE of your choice and set up a Maven project.
2. Add all the required dependencies mentioned above to your pom.xml file.
3. Write your tests in a Java class.

4. Run your test file.
After successfully running your test, you will see a folder named allure-results.
# **Step 2:**
After you finish writing your code, you should upload it to GitHub. If you don't have a GitHub account, you need to create one first. Once you have an account, create a new repository and then upload your code to GitHub.

# **Step 3:**
1. Go to the Actions tab. You have two options: either create a workflow using the suggested one, or set up your own workflow. Since you're using the Maven build tool, if you choose the suggested option, make sure to select Java with Maven.

2. You can choose either the "set up a workflow yourself" option or the Java with Maven one. Change the file name to whatever you want (I changed it to ci.yml) and copy and paste the provided code.

As you can see above, whenever there is a new push or pull request to the main branch, this workflow will be executed. In the workflow we set up JDK 11 on an Ubuntu runner and then execute our test script with the help of Maven.
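For reference, here is a sketch of what such a workflow file could look like. The report-building and deployment steps here use the third-party actions `simple-elf/allure-report-action` and `peaceiris/actions-gh-pages`, which is one common setup for this job; the exact contents of the linked ci.yml may differ:

```yaml
name: CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up JDK 11
        uses: actions/setup-java@v3
        with:
          distribution: 'temurin'
          java-version: '11'

      - name: Run tests with Maven
        run: mvn clean test
        continue-on-error: true   # publish the report even when tests fail

      - name: Build Allure report with history
        uses: simple-elf/allure-report-action@master
        if: always()
        with:
          allure_results: allure-results
          allure_history: allure-history

      - name: Deploy report to gh-pages
        uses: peaceiris/actions-gh-pages@v3
        if: always()
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_branch: gh-pages
          publish_dir: allure-history
```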
**Ensure that the branch you are using matches the one where you uploaded your code.** I uploaded my code to the main branch, so the displayed value is "main." If the branches don't match, it won't function properly and the build will fail.

3. Once you've pasted the code, commit the changes. Then navigate to the Actions tab, where you will find the workflow you created and see that it runs automatically. When it finishes, you will receive an error. If you select the failed workflow, you will be able to view the error message it is displaying.


Okay, how can we resolve this problem?
Navigate to the repository Settings, then Actions → General, check the "Read and write permissions" option, and save the changes.


4. Next, navigate back to the Actions tab and select the workflow. Then, click on it to rerun the job that previously failed. You will be able to observe the build process completing successfully.


5. Now, when you open the code tab of your repository, you will find two branches. Initially, there was only one branch. The second branch is for the GitHub pages of the report, where it stores the report and its history. To proceed further, go to the settings of your repository and then click on "Pages." Choose the gh-pages branch and save your selection by clicking on the save button.

6. If you go to the Actions tab, you will find that the "pages build and deployment" workflow is currently running. Once the deployment is successful, click on the workflow, and you will find the link to your GitHub Pages site where the Allure report is available. By clicking on that link, you will be able to view the report.



If you now add test cases and push the new code to your repository, after the workflow finishes running, you will be able to see the updated report at the same link.
Repository Link: https://github.com/SMShoron/allure-report-gh-page
[ci.yml code](t.ly/8be2o)
Thanks for reading this and if you find it useful please Like and Share it.
Happy learning 😊
You can connect to me on [*LinkedIn*](https://www.linkedin.com/in/sisadia/) | sadia |
1,482,722 | Crafting Microservices with NodeJS - Or How to Build a Servant from Scratch | Crafting Microservices with NodeJS - Or How to Build a Servant from Scratch If you're... | 0 | 2023-05-27T08:14:05 | https://dev.to/shubhamt619/crafting-microservices-with-nodejs-or-how-to-build-a-servant-from-scratch-1hkk | microservices, node, webdev, beginners | # Crafting Microservices with NodeJS - Or How to Build a Servant from Scratch
If you're reading this article, I presume that you've been smitten by the microservices charm and you're brave enough to tinker with it. Kudos, my friend. For those who just stumbled upon this article and are wondering, "What the heck is a microservice?" Well, microservices architecture is like having an army of small, efficient minions, each skilled at a specific task, as opposed to having a large, bulky, and clumsy monolith that tries to juggle everything (and often drops the ball).
Alright, enough of chit-chat. Let's roll up our sleeves and start crafting our very own microservice using NodeJS, the hip language that's as cool as the other side of the pillow. We'll be building a simple microservice that accepts a POST request with a JSON body of text and spits out the character count. Revolutionary? Nah. Informative? Absolutely.
## 1. The Shopping List
To start with, we need to install NodeJS and npm (Node Package Manager). They're like Batman and Robin in the world of JavaScript. If you haven't installed these, the internet is full of tutorials. Pick one and follow it. Now let's get some more tools, because you can't build a house with just a hammer.
- `Express` - A minimal and flexible Node.js web application framework that provides a robust set of features for web and mobile applications. Basically, the bread and butter of our microservice sandwich.
- `Body-parser` - A piece of middleware that helps us process JSON payloads. Think of it as a language interpreter for our service.
To install these dependencies, open your terminal and play the role of a npm magician.
```bash
npm install express body-parser --save
```
## 2. Setting Up the Boilerplate
We have our tools. Now, let's start putting things together. We'll begin by setting up a simple server. Create a new file called `app.js` and put the following code in it:
```javascript
const express = require('express');
const app = express();
const bodyParser = require('body-parser');
app.use(bodyParser.json());
app.listen(3000, () => {
console.log('Microservice listening on port 3000. Be there or be square!');
});
```
Run the script using `node app.js` and if you see the console message, congrats, your server is alive. It doesn't do anything right now, but hey, Rome wasn't built in a day.
## 3. Building the Microservice - 'The Character Counter'
Let's start putting this microservice together. What we want is a POST endpoint that accepts JSON data and returns the number of characters in the text.
Add the following code to `app.js`:
```javascript
app.post('/count', (req, res) => {
const text = req.body.text;
if (!text) {
res.status(400).json({ error: 'No text provided. What do you expect me to count?' });
return;
}
const count = text.length;
res.status(200).json({ text: `Your text is ${count} characters long. Such a novel you've written there!` });
});
```
Voila! You have your first microservice. All you need to do now is to start the server using `node app.js` and send a POST request to `http://localhost:3000/count` with a JSON body of `{"text": "your text here"}`. What's that you ask? How to send a POST request? You can use Postman, curl, or any other tool that makes
you feel like a wizard.
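For instance, with the server from `app.js` running, a curl invocation could look like this (the response text comes straight from the handler above):

```shell
curl -X POST http://localhost:3000/count \
  -H "Content-Type: application/json" \
  -d '{"text": "hello minions"}'
# -> {"text":"Your text is 13 characters long. Such a novel you've written there!"}
```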
## 4. Pat Yourself on the Back
Well, there you have it. A simple NodeJS microservice that doesn't do much but does it well. So, go ahead, crack open a soda, bask in the glory of your success, and start thinking about the next microservice you want to build. After all, one minion is never enough.
## Conclusion
Now that you've dipped your toes in the world of microservices, remember that the concept extends far beyond this simple example. You have databases to connect, authentication to manage, multiple services to orchestrate, and yes, bugs to squash. But hey, every giant leap begins with a small step, and you've just taken yours. Happy coding!
| shubhamt619 |
1,483,016 | Easy React data fetching with the new `use()` hook | React.use() is still an unstable API. For more information check out the support for promises React... | 0 | 2023-05-27T15:38:21 | https://chiubaca.com/easy-react-data-fetching-with-use-16jg | react, webdev, nextjs | > `React.use()` is still an unstable API. For more information check out the [support for promises React RFC](https://github.com/acdlite/rfcs/blob/first-class-promises/text/0000-first-class-support-for-promises.md) . At the time of writing this, you can only test this API in Next.js 13.
OK with that disclaimer out the way, lets talk about how `use()` works .
**In short, this new hook will let you run asynchronous code in your client-side React components**. You can think of it as the React version of `async` / `await`.
To appreciate why it's going to be a game changer for client-side data fetching, let's compare the old way of fetching data on the client with `useEffect()` against the new `use()` approach.
```js
// returns an array of user objects e.g:
// [ {id:1 name: "Leanne Graham"}, {id:1 name: "Ervin Howell"}]
export const fetchUsers = async () => {
const resp = await
fetch("https://jsonplaceholder.typicode.com/users");
const json = await resp.json();
return json;
};
```
```jsx
'use client'
import { useState, useEffect } from "react";
import { fetchUsers } from "@/fetch-users";
export const OldDataFetchData = () => {
const [usersData, setUsersData] = useState(null);
useEffect(() => {
if (!usersData) {
const startFetchingData = async () => {
const users = await fetchUsers();
setUsersData(users);
};
startFetchingData();
}
}, [usersData]);
if (!usersData) {
return <>Loading...</>;
}
return (
<>
{usersData.map((u) => (
<div key={u.id}>{u.name} </div>
))}
</>
);
};
```
This is idiomatic code for most React developers. But for those less familiar with React and `useEffect` you may ask all sort of questions like:
- Why can't the function you pass into `useEffect` be `async`?
- Why can't the whole React component be `async`?
- Why do you need to check `usersData` before calling `startFetchingData()`?
There are technical answers to all of these, but this is not a `useEffect` tutorial, so the quick response would be: because, React. 😅
My point is, this is not intuitive code to read but we React developers have become accustomed to the rules and quirks of `useEffect`.
Now let's see how we can rewrite this with `use()`.
```jsx
'use client'
import { use } from "react";
import { fetchUsers } from "@/fetch-users";
export const NewDataFetchData = () => {
const usersData = use(fetchUsers());
return (
<>
{usersData.map((u) => (
<div key={u.id}>{u.name} </div>
))}
</>
);
};
```
That's it!
Ok I lied.
A little bit more code is required if you want to handle a loading state. In the parent server component that uses this client component, you can wrap it in `Suspense`, which accepts a React component in its `fallback` prop.
```jsx
// ./app/page.jsx
import { Suspense } from "react";
import { NewDataFetchData } from "@/components/NewDataFetch";
export default function Home() {
return (
<main>
<Suspense fallback={<>Loading...</>}>
<NewDataFetchData />
</Suspense>
</main>
);
}
```
Overall, much more elegant to read and write.
---
An interesting property of the `use` hook is that it can be called conditionally 🤯.
This is a big deal because we've traditionally never been able to use [hooks conditionally](https://react.dev/warnings/invalid-hook-call-warning#breaking-rules-of-hooks).
Here is an example of how this could look taken from the [rfc](https://github.com/acdlite/rfcs/blob/first-class-promises/text/0000-first-class-support-for-promises.md#example-use-in-client-components-and-hooks).
```jsx
function Note({id, shouldIncludeAuthor}) {
const note = use(fetchNote(id));
let byline = null;
if (shouldIncludeAuthor) {
const author = use(fetchNoteAuthor(note.authorId));
byline = <h2>{author.displayName}</h2>;
}
return (
<div>
<h1>{note.title}</h1>
{byline}
<section>{note.body}</section>
</div>
);
}
```
---
I'm excited for this hook to become stable but I have mixed emotions.
On one hand, the ergonomics of data fetching will be _so much_ better, and we may no longer need to rely on third-party libraries like [TanStack Query](https://tanstack.com/query/latest).
On the other hand, our mental model of hooks needs to adjust when using `use()`. Maybe this is not a big deal, but I can see this being a point of confusion for beginner React devs.
If you want to test out the `use()` for yourself, feel free to fork this codesandbox which is running the code examples above.
{% codesandbox quirky-ishizaka-jg31he %} | chiubaca |
1,483,172 | Promesas con async y await | Hay dos formas de manejar las promesas en JavaScript una es con then que ya lo explique en un... | 0 | 2023-05-27T20:20:55 | https://dev.to/ulisesserranop/promesas-con-async-y-await-53gc | javascript, programming, webdev, spanish | There are two ways to handle promises in JavaScript. One is with `then`, which I already explained in a previous article, and the other, which I'll explain below, is `async` and `await`. In this example a request is made to a fake API.

The catch is that the result of that request is obtained inside the second `then`; now, if you wanted details of the first product in that category, you would have to make another `fetch` to the API, and so on. These chained `then` calls are known as callback hell. There comes a point where the logic becomes deeply nested, and if an error occurs anywhere it produces a cascading effect. Now, with `async` and `await`, we'll refactor this, and it would look like the following.

Things to notice: the code now lives inside a function prefixed with the reserved word `async`, and inside the function body the requests are still made with `fetch`, but each one is prefixed with the reserved word `await`. This means execution waits for the promise returned by `fetch` to resolve before continuing to the following lines; the same goes for the `json()` function, which returns a promise that resolves to an array or object ready to be manipulated with the language. With `async` and `await` the code becomes cleaner and easier to read. As for catching errors that might occur anywhere along the promise chain, we wrap the code in a `try`-`catch` block.

If you liked this article or learned something new you didn't know, I'd really appreciate it if you shared it; it helps me a lot. Thanks. | ulisesserranop |
1,483,307 | Understanding Lists, sets and tuples in Python. | Lists are one of the most commonly used data structures in Python. They are used to store... | 0 | 2023-05-28T01:09:11 | https://dev.to/bansikah/understanding-lists-sets-and-tuples-in-python-nph | beginners, programming, datastructure, python |

Lists are one of the most commonly used data structures in Python. They are used to store collections of data, such as a list of numbers or a list of names. In this article, we will explore the basics of lists in Python and learn how to work with them.
## Creating a collection
We will start with collections: a collection is a single variable used to store multiple values.
```
#Collection = single "variable" used to store multiple values
# List = [] ordered and changeable. Duplicates OK.
# Set = {} unordered and immutable but Add/Remove OK. No duplicates.
# Tuple = () ordered and unchangeable. Duplicates OK. Faster
fruits = ["apple", "orange", "banana", "coconut"]
print(fruits)
```
The collection of fruits that is created above will give you the following output.
```
['apple', 'orange', 'banana', 'coconut']
```
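The comments above mention sets and tuples too; here is a quick sketch of how the three collection types behave differently:

```python
fruits_list = ["apple", "apple", "banana"]    # ordered, changeable, duplicates kept
fruits_set = {"apple", "apple", "banana"}     # unordered, duplicates collapsed
fruits_tuple = ("apple", "apple", "banana")   # ordered, unchangeable, duplicates kept

print(len(fruits_list))    # 3
print(len(fruits_set))     # 2  (the duplicate "apple" was dropped)
print(len(fruits_tuple))   # 3

fruits_set.add("coconut")  # adding/removing elements is fine for a set
# fruits_tuple[0] = "orange"  # but this line would raise a TypeError
```

Use a list when order and editing matter, a set for uniqueness, and a tuple for fixed data.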
To access the elements found within your collection, you can use the index operator. Just like with strings, the first element has an index of `[0]`. Consider the example below.
```
fruits = ["apple", "orange", "banana", "coconut"]
print(fruits[0])
```
The output will be:
```
apple
```
Because it is the first element of the collection, with an index of `[0]`. The same indexing works for every element of the list: if we change the index from `[0]` to `[2]`, we get banana as our output.
```
fruits = ["apple", "orange", "banana", "coconut"]
print(fruits[2])
```
Output:
```
banana
```
With index ranges (slicing), you can select just the portion of the list you want. For example:
```
fruits = ["apple", "orange", "banana", "coconut"]
print(fruits[0:3])
```
It gives you the output.
```
['apple', 'orange', 'banana']
```
Slices also accept a third number, the step, which selects every n-th element. For example.
```
fruits = ["apple", "orange", "banana", "coconut"]
print(fruits[::2])
```
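The slice above takes every second element, so the output would be `['apple', 'banana']`. Slicing also works with negative indices, which count from the end of the list; this is an extra detail not covered above, sketched briefly here:

```python
fruits = ["apple", "orange", "banana", "coconut"]

# Negative indices count from the end of the list: -1 is the last element
print(fruits[-1])    # coconut
print(fruits[-2])    # banana

# A negative step walks the list backwards, giving a reversed copy
print(fruits[::-1])  # ['coconut', 'banana', 'orange', 'apple']
```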
You can also iterate through your collection using a `for loop`.
```
fruits = ["apple", "orange", "banana", "coconut"]
for x in fruits:
    print(x)
```
Will give the output:
```
apple
orange
banana
coconut
```
Or, for better readability, you can rename `x` to anything that makes the loop easier to understand, and print it out.
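If you also need the position of each element while looping, the built-in `enumerate()` function pairs each index with its value; a small sketch, not part of the original example:

```python
fruits = ["apple", "orange", "banana", "coconut"]

# enumerate() yields (index, value) pairs as the loop runs
for index, fruit in enumerate(fruits):
    print(index, fruit)
```

This prints each fruit next to its index, starting from 0.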
There are some functions we can also use to manipulate our collections.
```
fruits = ["apple", "orange", "banana", "coconut"]
print(dir(fruits))
```
Using the `dir()` function gives the following output:
```
['__add__', '__class__', '__class_getitem__', '__contains__', '__delattr__', '__delitem__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__reversed__', '__rmul__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', 'append', 'clear', 'copy', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']
```
Also, we have a `help()` function, which is used to describe an object and its methods; `help(fruits)` prints the following:
```
Help on list object:
class list(object)
| list(iterable=(), /)
|
| Built-in mutable sequence.
|
| If no argument is given, the constructor creates a new empty list.
| The argument must be an iterable if specified.
|
| Methods defined here:
|
| __add__(self, value, /)
| Return self+value.
|
| __contains__(self, key, /)
| Return key in self.
|
| __delitem__(self, key, /)
| Delete self[key].
|
| __eq__(self, value, /)
| Return self==value.
|
| __ge__(self, value, /)
| Return self>=value.
|
| __getattribute__(self, name, /)
| Return getattr(self, name).
|
| __getitem__(...)
| x.__getitem__(y) <==> x[y]
|
| __gt__(self, value, /)
| Return self>value.
|
| __iadd__(self, value, /)
| Implement self+=value.
|
| __imul__(self, value, /)
| Implement self*=value.
|
| __init__(self, /, *args, **kwargs)
| Initialize self. See help(type(self)) for accurate signature.
|
| __iter__(self, /)
| Implement iter(self).
|
| __le__(self, value, /)
| Return self<=value.
|
| __len__(self, /)
| Return len(self).
|
| __lt__(self, value, /)
| Return self<value.
|
| __mul__(self, value, /)
| Return self*value.
|
| __ne__(self, value, /)
| Return self!=value.
|
| __repr__(self, /)
| Return repr(self).
|
| __reversed__(self, /)
| Return a reverse iterator over the list.
|
| __rmul__(self, value, /)
| Return value*self.
|
| __setitem__(self, key, value, /)
| Set self[key] to value.
|
| __sizeof__(self, /)
| Return the size of the list in memory, in bytes.
|
| append(self, object, /)
| Append object to the end of the list.
|
| clear(self, /)
| Remove all items from list.
|
| copy(self, /)
| Return a shallow copy of the list.
|
| count(self, value, /)
| Return number of occurrences of value.
|
| extend(self, iterable, /)
| Extend list by appending elements from the iterable.
|
| index(self, value, start=0, stop=9223372036854775807, /)
| Return first index of value.
|
| Raises ValueError if the value is not present.
|
| insert(self, index, object, /)
| Insert object before index.
|
| pop(self, index=-1, /)
| Remove and return item at index (default last).
|
| Raises IndexError if list is empty or index is out of range.
|
| remove(self, value, /)
| Remove first occurrence of value.
|
| Raises ValueError if the value is not present.
|
| reverse(self, /)
| Reverse *IN PLACE*.
|
| sort(self, /, *, key=None, reverse=False)
| Sort the list in ascending order and return None.
|
| The sort is in-place (i.e. the list itself is modified) and stable (i.e. the
| order of two equal elements is maintained).
|
| If a key function is given, apply it once to each list item and sort them,
| ascending or descending, according to their function values.
|
| The reverse flag can be set to sort in descending order.
|
| ----------------------------------------------------------------------
| Class methods defined here:
|
| __class_getitem__(...) from builtins.type
| See PEP 585
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __hash__ = None
```
If you need the number of elements in your collection, you can use the `len()` function.
```
fruits = ["apple", "orange", "banana", "coconut"]
print(len(fruits))
```
will print the output:
```
4
```
Giving you the exact length of your collection.
Using the `in` operator, we can check whether a value exists within our collection.
```
fruits = ["apple", "orange", "banana", "coconut"]
print("apple" in fruits)
```
will print `True` as output, because `apple` is within our collection.
```
True
```
Now what if you try an element which is not within our collection? For example, `pineapple`: it prints `False`, because `pineapple` is not found within our collection.
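A common use of the `in` operator is inside an `if` statement, and the opposite check is written `not in`; a quick sketch:

```python
fruits = ["apple", "orange", "banana", "coconut"]

if "pineapple" in fruits:
    print("We have pineapple!")
else:
    print("No pineapple here.")

# "not in" is the opposite membership test
print("pineapple" not in fruits)  # True
```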
## Creating a List.
A `list` in python is an `ordered` and `changeable` collection of data objects. Unlike arrays, which can contain a mixture of a single type, a list can contain a mixture of objects. With list they are `ordered` and `changeable`, and they `accept duplicates`.
We can change one of our values or elements after we create our list. For example:
```
fruits = ["apple", "orange", "banana", "coconut"]
fruits[0] = 'pineapple'
for x in fruits:
    print(x)
```
Results:
```
pineapple
orange
banana
coconut
```
You can see that our first element is no longer `apple`; it has been changed from `apple` to `pineapple`.
You can change the index from `fruits[0]` to `fruits[2]`, and `pineapple` will replace the element at `index [2]`, which is `banana`.
Let us cover some of the methods that are found in a list.
The first is `append()`.
```
fruits = ["apple", "orange", "banana", "coconut"]
fruits.append("pineapple")
print(fruits)
```
Will `add` `pineapple` at the end of our list of fruits. See the results below:
```
['apple', 'orange', 'banana', 'coconut', 'pineapple']
```
To remove an element in our list, we will use the `remove()` method.
```
fruits = ["apple", "orange", "banana", "coconut"]
fruits.remove("apple")
print(fruits)
```
Results:
```
['orange', 'banana', 'coconut']
```
and apple has been removed from our list.
To insert an element into a list, you can use the `insert(index, "element")` method, which inserts a value at a given index. For example:
```
fruits = ["apple", "orange", "banana", "coconut"]
fruits.insert(0, "pineapple")
print(fruits)
```
Will output:
```
['pineapple', 'apple', 'orange', 'banana', 'coconut']
```
putting `pineapple` at the first position which has an `index` of `0`.
Also, we have a `sort()` method, which is used to sort a list in alphabetical order.
```
fruits = ["apple", "orange", "banana", "coconut"]
fruits.sort()
print(fruits)
```
we have,
```
['apple', 'banana', 'coconut', 'orange']
```
Now we have our list in alphabetical order.
To reverse a list, you can use the `reverse()` method.
```
fruits = ["apple", "orange", "banana", "coconut"]
fruits.reverse()
print(fruits)
```
Output:
```
['coconut', 'banana', 'orange', 'apple']
```
We now have our list in reversed order. Note that the elements are not reversed alphabetically; they simply appear in the reverse of the order in which we placed them. If you want reverse alphabetical order, you can first `sort()` and then `reverse()`.
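As a shortcut, `sort()` also accepts a `reverse` flag, so you can get reverse alphabetical order in a single call instead of sorting and then reversing:

```python
fruits = ["apple", "orange", "banana", "coconut"]

# reverse=True sorts in descending (reverse alphabetical) order
fruits.sort(reverse=True)
print(fruits)  # ['orange', 'coconut', 'banana', 'apple']
```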
To clear all the elements of a list, you can use the `clear()` method.
```
fruits = ["apple", "orange", "banana", "coconut"]
fruits.clear()
print(fruits)
```
We can also return the index of a value or an element.
```
fruits = ["apple", "orange", "banana", "coconut"]
print(fruits.index("apple"))
print(fruits)
```
```
and we have our index returned:
```
0
['apple', 'orange', 'banana', 'coconut']
```
`Zero [0]`, which is the index of `apple`.
We can count the number of times a value appears within a list, because duplicates are OK.
```
fruits = ["apple","banana", "orange", "banana", "coconut"]
print(fruits.count("banana"))
print(fruits)
```
We can see that there are 2 bananas in the list:
```
2
['apple', 'banana', 'orange', 'banana', 'coconut']
```
## Creating sets.
Sets are `unordered` and `unindexed`; existing elements cannot be changed, but `Add/Remove` is OK. `No duplicates` are allowed.
To create a set, we use `{}` instead of `[]`.
```
fruits = {"apple", "orange", "banana", "coconut"}
print(fruits)
```
A set is unordered, as we can see below (the order of the output may differ from run to run).
```
{'orange', 'coconut', 'banana', 'apple'}
```
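Because sets do not accept duplicates, any repeated values are silently dropped when the set is created; a quick sketch to illustrate:

```python
fruits = {"apple", "orange", "apple", "banana", "apple"}

# Only one "apple" survives, so the set has 3 elements
print(len(fruits))        # 3
print("apple" in fruits)  # True
```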
To display all the attributes and methods of a set, you can use the `dir()` function.
```
fruits = {"apple", "orange", "banana", "coconut"}
print(dir(fruits))
```
```
['__and__', '__class__', '__class_getitem__', '__contains__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__iand__', '__init__', '__init_subclass__', '__ior__', '__isub__', '__iter__', '__ixor__', '__le__', '__len__', '__lt__', '__ne__', '__new__', '__or__', '__rand__', '__reduce__', '__reduce_ex__', '__repr__', '__ror__', '__rsub__', '__rxor__', '__setattr__', '__sizeof__', '__str__', '__sub__', '__subclasshook__', '__xor__', 'add', 'clear', 'copy', 'difference', 'difference_update', 'discard', 'intersection', 'intersection_update', 'isdisjoint', 'issubset', 'issuperset', 'pop', 'remove', 'symmetric_difference', 'symmetric_difference_update', 'union', 'update']
{'apple', 'orange', 'coconut', 'banana'}
```
For an in-depth description of these methods, you can use the `help()` function.
```
class set(object)
| set() -> new empty set object
| set(iterable) -> new set object
|
| Build an unordered collection of unique elements.
|
| Methods defined here:
|
| __and__(self, value, /)
| Return self&value.
|
| __contains__(...)
| x.__contains__(y) <==> y in x.
|
| __eq__(self, value, /)
| Return self==value.
|
| __ge__(self, value, /)
| Return self>=value.
|
| __getattribute__(self, name, /)
| Return getattr(self, name).
|
| __gt__(self, value, /)
| Return self>value.
|
| __iand__(self, value, /)
| Return self&=value.
|
| __init__(self, /, *args, **kwargs)
| Initialize self. See help(type(self)) for accurate signature.
|
| __ior__(self, value, /)
| Return self|=value.
|
| __isub__(self, value, /)
| Return self-=value.
|
| __iter__(self, /)
| Implement iter(self).
|
| __ixor__(self, value, /)
| Return self^=value.
|
| __le__(self, value, /)
| Return self<=value.
|
| __len__(self, /)
| Return len(self).
|
| __lt__(self, value, /)
| Return self<value.
|
| __ne__(self, value, /)
| Return self!=value.
|
| __or__(self, value, /)
| Return self|value.
|
| __rand__(self, value, /)
| Return value&self.
|
| __reduce__(...)
| Return state information for pickling.
|
| __repr__(self, /)
| Return repr(self).
|
| __ror__(self, value, /)
| Return value|self.
|
| __rsub__(self, value, /)
| Return value-self.
|
| __rxor__(self, value, /)
| Return value^self.
|
| __sizeof__(...)
| S.__sizeof__() -> size of S in memory, in bytes
|
| __sub__(self, value, /)
| Return self-value.
|
| __xor__(self, value, /)
| Return self^value.
|
| add(...)
| Add an element to a set.
|
| This has no effect if the element is already present.
|
| clear(...)
| Remove all elements from this set.
|
| copy(...)
| Return a shallow copy of a set.
|
| difference(...)
| Return the difference of two or more sets as a new set.
|
| (i.e. all elements that are in this set but not the others.)
|
| difference_update(...)
| Remove all elements of another set from this set.
|
| discard(...)
| Remove an element from a set if it is a member.
|
| If the element is not a member, do nothing.
|
| intersection(...)
| Return the intersection of two sets as a new set.
|
| (i.e. all elements that are in both sets.)
|
| intersection_update(...)
| Update a set with the intersection of itself and another.
|
| isdisjoint(...)
| Return True if two sets have a null intersection.
|
| issubset(...)
| Report whether another set contains this set.
|
| issuperset(...)
| Report whether this set contains another set.
|
| pop(...)
| Remove and return an arbitrary set element.
| Raises KeyError if the set is empty.
|
| remove(...)
| Remove an element from a set; it must be a member.
|
| If the element is not a member, raise a KeyError.
|
| symmetric_difference(...)
| Return the symmetric difference of two sets as a new set.
|
| (i.e. all elements that are in exactly one of the sets.)
|
| symmetric_difference_update(...)
| Update a set with the symmetric difference of itself and another.
|
| union(...)
| Return the union of sets as a new set.
|
| (i.e. all elements that are in either set.)
|
| update(...)
| Update a set with the union of itself and others.
|
| ----------------------------------------------------------------------
| Class methods defined here:
|
| __class_getitem__(...) from builtins.type
| See PEP 585
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __hash__ = None
```
Many of these methods are similar to those of a list. We can't change an existing value of a set, but we can add or remove elements. Let's use the `add()` method to add an element to our set.
```
fruits = {"apple", "orange", "banana", "coconut"}
print(fruits.add("mango"))
print(fruits)
```
To have:
```
None
{'mango', 'banana', 'apple', 'orange', 'coconut'}
```
Notice that `add()` returns `None`; the set itself is modified in place, and `"mango"` is now added to our set.
Also, we can remove an element using the `remove()` method.
```
fruits = {"apple", "orange", "banana", "coconut"}
print(fruits.remove("apple"))
print(fruits)
```
And now `apple` is removed. Like `add()`, the `remove()` method returns `None`:
```
None
{'orange', 'banana', 'coconut'}
```
The `pop()` method removes and returns an arbitrary element from the set (since sets are unordered, there is no "first" element).
```
fruits = {"apple", "orange", "banana", "coconut"}
fruits.pop()
print(fruits)
```
```
{'apple', 'banana', 'coconut'}
```
and the `clear()` method is used to clear all elements from the set.
```
fruits = {"apple", "orange", "banana", "coconut"}
fruits.clear()
print(fruits)
```
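The `help()` output above also lists set operations such as `union()`, `intersection()` and `difference()`. Here is a brief sketch of how they behave (the fruit names are just illustrative):

```python
basket = {"apple", "banana", "orange"}
tropical = {"banana", "coconut", "mango"}

print(basket.union(tropical))         # all elements that are in either set
print(basket.intersection(tropical))  # {'banana'} - elements in both sets
print(basket.difference(tropical))    # elements in basket but not in tropical
```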
Now lastly let’s talk about tuples.
## Creating a tuple.
A `tuple` is created in the same way as a set, but `()` are used instead of `{}`. It is `ordered` and `unchangeable`. `Duplicates` are OK, and tuples are faster (which is why it is often preferable to use tuples rather than lists when the data does not need to change). So let us see the example below.
```
fruits = ("apple", "orange", "banana", "coconut")
print(fruits)
```
Results:
```
('apple', 'orange', 'banana', 'coconut')
```
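Because a tuple has a fixed number of elements, its values can be unpacked directly into separate variables; an extra detail worth knowing, sketched below:

```python
fruits = ("apple", "orange", "banana", "coconut")

# Unpack the four values into four variables in one line
first, second, third, fourth = fruits
print(first)   # apple
print(fourth)  # coconut
```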
Again, the `dir()` function can be used to see its attributes and methods.
```
fruits = ("apple", "orange", "banana", "coconut")
print(dir(fruits))
print(fruits)
```
```
['__add__', '__class__', '__class_getitem__', '__contains__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getnewargs__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmul__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'count', 'index']
('apple', 'orange', 'banana', 'coconut')
```
Use the `help()` function to get more insight into these attributes.
```
fruits = ("apple", "orange", "banana", "coconut")
help(fruits)
```
```
class tuple(object)
| tuple(iterable=(), /)
|
| Built-in immutable sequence.
|
| If no argument is given, the constructor returns an empty tuple.
| If iterable is specified the tuple is initialized from iterable's items.
|
| If the argument is a tuple, the return value is the same object.
|
| Built-in subclasses:
| asyncgen_hooks
| UnraisableHookArgs
|
| Methods defined here:
|
| __add__(self, value, /)
| Return self+value.
|
| __contains__(self, key, /)
| Return key in self.
|
| __eq__(self, value, /)
| Return self==value.
|
| __ge__(self, value, /)
| Return self>=value.
|
| __getattribute__(self, name, /)
| Return getattr(self, name).
|
| __getitem__(self, key, /)
| Return self[key].
|
| __getnewargs__(self, /)
|
| __gt__(self, value, /)
| Return self>value.
|
| __hash__(self, /)
| Return hash(self).
|
| __iter__(self, /)
| Implement iter(self).
|
| __le__(self, value, /)
| Return self<=value.
|
| __len__(self, /)
| Return len(self).
|
| __lt__(self, value, /)
| Return self<value.
|
| __mul__(self, value, /)
| Return self*value.
|
| __ne__(self, value, /)
| Return self!=value.
|
| __repr__(self, /)
| Return repr(self).
|
| __rmul__(self, value, /)
| Return value*self.
|
| count(self, value, /)
| Return number of occurrences of value.
|
| index(self, value, start=0, stop=9223372036854775807, /)
| Return first index of value.
|
| Raises ValueError if the value is not present.
|
| ----------------------------------------------------------------------
| Class methods defined here:
|
| __class_getitem__(...) from builtins.type
| See PEP 585
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
```
Use the `len()` function to find the length, the `in` operator to check if a value exists in the tuple, the `index()` method to find the index of a value, and the `count()` method to count the number of times a value appears in the tuple.
```
fruits = ("apple", "orange","coconut" ,"banana", "coconut")
print(fruits.count("coconut"))
print(fruits)
```
Results:
```
2
('apple', 'orange', 'coconut', 'banana', 'coconut')
```
We can also use the `for loop` to iterate through our tuple.
```
fruits = ("apple", "orange","coconut" ,"banana", "coconut")
for x in fruits:
    print(x)
```
```
apple
orange
coconut
banana
coconut
```
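Since lists, sets and tuples are all collections, you can convert between them with the built-in `list()`, `set()` and `tuple()` constructors; converting a list through a set is a common trick for dropping duplicates (though the original order is lost):

```python
fruits = ["apple", "banana", "orange", "banana"]

as_tuple = tuple(fruits)  # ordered and unchangeable copy
as_set = set(fruits)      # duplicates dropped, order lost
unique = list(as_set)     # back to a list, now without duplicates

print(as_tuple)     # ('apple', 'banana', 'orange', 'banana')
print(len(as_set))  # 3
```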
## Summary.
Collections are single variables that are used to store multiple values.
Lists are ordered and changeable, and can contain duplicates; they are created using `[]`.
Sets are unordered and unindexed; we can add elements to or remove them from a set, but they do not accept duplicates. Created using `{}`.
Tuples are ordered and unchangeable; they allow duplicate values and are much faster. Created using `()`.
OK, that's all for this article. I will be coming up with another article on how to understand dictionaries in Python. Please feel free to ask your questions in the comment section. Adios!
| bansikah |
1,483,619 | Angular Universal vs React Server-Side Rendering: Which One Should You Choose? | Discover the benefits and drawbacks of Angular Universal and React Server-Side Rendering for boosting performance and SEO compatibility in modern web development. | 0 | 2023-05-28T11:20:01 | https://angulardive.com/blog/angular-universal-vs-react-server-side-rendering-which-one-should-you-choose/ | ## Angular Universal vs React Server-Side Rendering: Which One Should You Choose?
Server-Side Rendering (SSR) is an important technique for modern web development. With the rise of Single Page Applications (SPAs), SSR can help boost the performance of these applications, particularly in terms of search engine optimization (SEO) and user experience (UX).
When it comes to choosing a framework for your SSR needs, two popular options in the frontend development community are Angular Universal and React Server-Side Rendering. In this article, we will explore the benefits and drawbacks of each option to help you determine which one is the best fit for your project.
### What is Angular Universal?
Angular Universal is a server-side rendering solution for the Angular framework. It allows developers to render their Angular applications on the server before sending them to the client, providing a fully rendered HTML document to the browser rather than a bare-bones template that JavaScript code will fill in later.
The end result is a faster initial load time, better SEO compatibility, and a better user experience overall.
### Benefits of Angular Universal
One of the biggest benefits of using Angular Universal is the potential for improved SEO. When search engines crawl your website, they typically do not execute JavaScript. This means that if your application relies heavily on JavaScript to render content, search engines may not be able to index all of your content, resulting in lower search engine ranking and visibility.
By using Angular Universal, you can ensure that search engines are able to crawl and index all of your content, helping to improve your search engine ranking and visibility. This can be particularly important for businesses that rely heavily on search engine traffic to drive sales.
Another benefit of Angular Universal is improved performance. Because the server is rendering the Angular application before sending it to the client, the initial load time is much faster than it would be if the client had to wait for JavaScript to finish rendering the content. This can help improve user experience overall, particularly on slow internet connections or older devices.
### Drawbacks of Angular Universal
One of the main drawbacks of using Angular Universal is the increased complexity of setup and maintenance. While Angular Universal provides a number of tools to simplify this process, it can still be a more difficult process to set up and maintain than some other SSR solutions.
This can be particularly problematic for smaller teams or those with less experience in server-side development. However, for larger teams or those with more experience, the benefits of using Angular Universal may outweigh these drawbacks.
### What is React Server-Side Rendering?
React Server-Side Rendering is a solution for rendering React applications on the server-side. It allows developers to generate a complete HTML document on the server before sending it to the client, providing a faster initial load time and improved SEO and accessibility.
Like Angular Universal, React Server-Side Rendering allows for improved SEO compatibility, as search engines are able to crawl and index all of the content on your website. It also improves UX by providing a faster initial load time, particularly for users on slow internet connections or older devices.
### Benefits of React Server-Side Rendering
One of the biggest benefits of using React Server-Side Rendering is the improved performance it provides. By rendering the React application on the server before sending it to the client, the initial load time is significantly reduced, resulting in a better user experience overall.
Another benefit of React Server-Side Rendering is the ability to reuse the same code on both the client and server. This can simplify the overall development process and reduce the amount of code required, making it easier to maintain and update the application over time.
### Drawbacks of React Server-Side Rendering
One of the main drawbacks of using React Server-Side Rendering is that it can be more difficult to implement than other SSR solutions. While React provides a number of tools to simplify this process, it still requires a significant amount of setup and configuration to get up and running.
This can be particularly problematic for smaller teams or those with less experience in server-side development. However, for larger teams or those with more experience, the benefits of using React Server-Side Rendering may outweigh these drawbacks.
### Conclusion
When it comes to choosing between Angular Universal and React Server-Side Rendering, there are a number of factors to consider. Both solutions provide similar benefits, such as improved SEO compatibility and faster initial load times. However, the decision ultimately comes down to your team's specific needs and expertise.
For smaller teams or those with less experience in server-side development, React Server-Side Rendering may be the better choice due to its simpler implementation. For larger teams or those with more experience, Angular Universal may be the better choice due to its more advanced capabilities and flexibility.
Ultimately, the decision comes down to your specific use case and which solution will provide the best overall experience for your users.
| josematoswork | |
1,483,739 | The Power of Rust in Automotive Software Development | The automotive industry is experiencing a significant transformation driven by the integration of... | 0 | 2023-05-30T06:37:48 | https://blog.chetanmittaldev.com/the-power-of-rust-in-automotive-software-development | ---
title: The Power of Rust in Automotive Software Development
published: true
date: 2023-05-28 09:10:39 UTC
tags:
canonical_url: https://blog.chetanmittaldev.com/the-power-of-rust-in-automotive-software-development
---

The automotive industry is experiencing a significant transformation driven by the integration of software in vehicles.
From safety-critical systems to in-car entertainment and connectivity, software development has become a key driver of innovation in the automotive sector.
Rust, a programming language renowned for its emphasis on safety, performance, and reliability, is playing a pivotal role in this revolution.
In this article, we will explore the diverse applications of Rust in automotive software development and delve into how it effectively addresses the challenges encountered in this dynamic field.
## **Enhancing Safety and Reliability in Automotive Systems**
Safety-critical systems lie at the heart of automotive software development. The reliability and security of software directly impact the well-being of vehicle occupants and other road users.
While traditional languages like C and C++ have long been utilized in this domain, they come with inherent risks such as memory-related vulnerabilities and undefined behavior.
Rust provides a compelling solution by offering <mark>robust safety guarantees</mark> through features such as ownership and strict compile-time checks.
The <mark>ownership system in Rust</mark> enables precise control over memory allocation and deallocation, effectively eliminating common issues like null pointer dereferences and buffer overflows.
By enforcing these safety measures at compile-time, Rust mitigates risks and enhances the overall reliability and robustness of automotive software, instilling confidence in its performance even in critical scenarios.
## **Unlocking Performance Optimization Potential with Rust**
Performance optimization is a critical aspect of automotive software development, particularly in real-time systems where quick response times and efficient resource utilization are vital.
While languages like C and C++ have traditionally been favored for their low-level control and performance advantages, manual memory management poses challenges such as memory leaks and dangling pointers.
> Rust strikes an ideal balance between performance and safety.
Its <mark>ownership and borrowing system</mark> allows for efficient memory management without the need for explicit deallocation, mitigating common memory-related issues.
Furthermore, Rust's <mark>concurrency model</mark> ensures safe and efficient parallel execution, harnessing the full potential of multi-core processors.
This unique combination of control, efficiency, and safety makes Rust an excellent choice for developing high-performance automotive software that meets the stringent demands of modern vehicles.
## **Empowering Connectivity and In-Car Entertainment Experiences**
In the era of connected vehicles, software development for in-car entertainment systems and connectivity features presents its own set of challenges.
The software must seamlessly integrate with a wide array of hardware components, facilitate secure communication with external devices, and ensure reliability and data privacy.
<mark>Rust's reliability and robustness</mark> make it well-suited for addressing these challenges.
Its ownership and borrowing system, coupled with <mark>strong compile-time checks</mark>, help eliminate common bugs and vulnerabilities, enhancing data security and reducing the risk of potential breaches.
Rust's expressive type system empowers developers to create clear, maintainable code that effectively handles complex interactions between various components, ensuring seamless connectivity and enhanced in-car entertainment experiences.
Moreover, the <mark>extensive ecosystem of libraries and frameworks available</mark> in Rust provides specialized tools and resources tailored specifically for connectivity and entertainment applications in the automobile industry, facilitating rapid development and innovation.
## **Enabling Advanced Driver-Assistance Systems (ADAS)**
The development of Advanced Driver-Assistance Systems (ADAS) is revolutionizing the automotive industry, with the aim of improving vehicle safety and enhancing the driving experience.
ADAS systems rely heavily on software to interpret sensor data, make real-time decisions, and assist drivers in critical situations.
Rust's safety and reliability features make it an excellent choice for ADAS software development.
The ownership and borrowing system in Rust <mark>minimizes the risk of memory-related errors</mark>, ensuring stable and predictable behavior.
This is crucial in ADAS systems where precise and timely response is paramount.
Additionally, Rust's strong compile-time checks and expressive type system aid in developing robust and error-free code, enabling the creation of ADAS software that meets the stringent safety requirements of the automotive industry.
## **Supporting Over-the-Air Updates and Maintenance**
With the increasing complexity of automotive software, the ability to provide over-the-air updates and maintenance has become essential.
Software updates are necessary to address security vulnerabilities, introduce new features, and improve system performance.
However, ensuring the integrity and reliability of these updates poses challenges due to the critical nature of automotive systems.
Rust offers a secure and reliable platform for over-the-air updates and maintenance.
Its emphasis on memory safety and strong compile-time checks <mark>minimizes the risk of introducing bugs or vulnerabilities</mark> during the update process.
Furthermore, Rust's <mark>lightweight runtime and efficient code execution</mark> enable seamless and efficient updates, minimizing downtime for vehicles and enhancing the overall user experience.
By leveraging Rust, automotive manufacturers can ensure the timely delivery of updates and maintenance, keeping vehicles up-to-date and secure throughout their lifespan.
## **Driving Innovation Forward**
Rust's presence in the automotive industry is <mark>driving innovation</mark> and pushing the boundaries of what's possible in software development for vehicles.
By leveraging the safety, performance, and reliability advantages of Rust, software programmers can create cutting-edge automotive software that enhances safety, optimizes performance, and delivers seamless connectivity and entertainment experiences.
As the automotive landscape continues to evolve, Rust's role in shaping the future of automotive technology will only grow stronger, enabling the development of smarter, more efficient, and highly connected vehicles.
## **Conclusion**
By harnessing the unique features and advantages of Rust, software developers can create robust, secure, and high-performing automotive software that pushes the boundaries of innovation in the automobile industry, enabling the development of safer, more efficient, and highly connected vehicles.
Rust is not just making its importance felt in the automotive industry but is important in many other [use cases](https://blog.chetanmittaldev.com/10-best-use-cases-of-rust-programming-language-in-2023) as well such as [Industrial Automation](https://dev.to/chetanmittaldev/industrial-automation-made-safer-with-rust-a5g), [IoT Devices](https://dev.to/chetanmittaldev/build-effective-iot-solutions-by-unlocking-the-power-of-rust-lang-bmp), [Embedded Systems](https://dev.to/chetanmittaldev/powering-embedded-systems-development-with-rust-2gbb), [Robotics Industry](https://dev.to/chetanmittaldev/unleashing-the-power-of-rust-in-robotics-security-and-real-time-excellence-44fm), etc. | chetanmittaldev | |
1,484,680 | Mastering Readme Files | A Guide to Project Documentation | 0 | 2023-05-29T14:19:31 | https://dev.to/unnotedme/mastering-readme-files-1dmg | readme, documentation, project, git | ---
title: Mastering Readme Files
published: true
description: A Guide to Project Documentation
tags: readme, documentation, project, git
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2023-05-29 14:00 +0000
---
#### 📌 The Readme is a guide that covers the main purpose of your project, the technologies used, some how-to's (such as how to install it and how to use it), the license, and how others can contribute.
Readme files use the `.md` extension, which means they are written in Markdown. It's easy to learn, and you can use [this editor](https://stackedit.io/) to write your file more easily.
Your file needs to be easily readable and organized in a way that you, and others, can understand and use as a resource while developing, testing and using your project. Use titles and subtitles, divide it into sections, and give it structure. Write short paragraphs, use bullet lists and highlight important content using **bold**.
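To make that concrete, here is a minimal skeleton putting those ideas into practice (the section names and contents are just examples):

```markdown
# Project Name

A short description of what the project does and why it exists.

## Technologies

- Language or framework one
- Library two

## Install

Step-by-step installation instructions go here.

## Usage

- **Feature one:** what it does and how to use it.
- **Feature two:** what it does and how to use it.

## License

[MIT](LICENSE)
```

Each heading becomes a jumpable section on GitHub, which is what makes the table of contents below possible.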
Remember, your project's Readme is the first impression someone will have of your project, and you don't want it to be sloppy and badly written. So let's check the best way to create your Readme file!
## What does your README need?
### Name
This is the name of the project. It can be the name you are giving to your product or what type of project it is.
### Description
Here you are going to add what your project does and what you are planning to add to it. Write briefly about what technologies you used and how you used them. This part is a summary of everything you worked on; be brief, since you will get into more detail in the rest of your Readme document.
### Table of Contents
This is here to keep everything organized. The more you write, the harder it is to find information. Create your table of contents and separate your sections in it.
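A sketch of what this can look like, using GitHub-style anchor links (the section names are examples):

```markdown
## Table of Contents

- [Description](#description)
- [Technologies](#technologies)
- [Install and Run](#install-and-run)
- [Usage](#usage)
- [License](#license)
```

GitHub generates an anchor for every heading (lowercased, spaces turned into hyphens), so these links jump straight to each section.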
### Technologies
It's time to add everything you used in your application: languages, libraries, APIs and more. Did you use it? You name it! This will help you and future contributors keep track of everything used to build the project. It is also a great way of sharing great tools with new learners.
### Install and Run
Here you will explain, step by step, how to install and run the development environment. Add the code, if necessary, and explain how to do it for the different operating systems, if needed.
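For example, a minimal version of this section might look like the following (the repository URL and commands are placeholders, not a real project):

````markdown
## Install and Run

```bash
git clone https://github.com/<your-user>/<your-project>.git
cd <your-project>
npm install
npm start
```
````

If some step differs between operating systems, add a short note or a sub-section per system right below the commands.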
### Usage
Hopefully, you developed the project thinking about your end users' needs. Now it's time for you to explain to them how to use it. I know you think your app is completely intuitive, but it is worth making sure the end user knows how to use the project and all the features that might help them. You can write a small tutorial, a walkthrough, or even use bullet points listing all the application's functionalities.

### Contributing
If your project is open-source, you can add this section to show how other developers, testers, designers and users can contribute to it.
Add the how-tos:
- How to report a bug?
- How to make a pull request (PR)?
- How to get support?
- How to make a donation? (If you want to let people make donations for it.)
And also add some guides:
- Coding Style Guide
- Styling Guide
- Code of Conduct
This part can be a bit bigger than the others and you might want to separate it into different files.
### Acknowledgments
Add what inspired you to create the project (other repos, YouTubers, books, etc.), what resources you used (documentation, websites, courses, etc.) and who helped you build this project. I also suggest you tell a bit about yourself, since you developed the project and deserve the credit!
### License
Developers often forget about it, but adding a license is key: it shows other developers the limits of how your software may be used. To do so, add a file to your project called `LICENSE.md` or just `LICENSE`, containing the full copy of the license, with your name and year in it.
For the Readme file, just link the license under the license section.
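The license section itself can be as short as this (assuming the license file lives at the repository root):

```markdown
## License

This project is licensed under the [MIT License](LICENSE).
```

Swap MIT for whatever license you actually chose; the link just points to the `LICENSE` file in the repo.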
## Extras
These are extra things you can add to your `README.md` file to make it look more complete. They are just recommendations, and you can add them if you want (most of them are just to make it look more aesthetically pleasing).
### Images
This will help your Readme feel less like a bunch of text. These are some ideas:
- Add your project logo right next to your project name.
- Add screenshots of your application in the usage section to demonstrate to the end user how your app works.
- Add a photo of yourself on the credits so people can identify you more easily (or an avatar that represents you - I do most of mine on the [Picrew](https://picrew.me/en) website).
### Status Badges
I'm not a big fan of this one but you can add status badges on your Readme using the [badges/shields](https://github.com/badges/shields) repo. This will illustrate, in a simplified way, your project status.

### Emojis
I am a big fan of using emojis in your Readme file. They help it become more visual and easy to read (and it looks cool 😎). Make sure you add emojis that make sense to the subject and do not overload your file like I do sometimes.
## Examples
Here is a repo list of great Readme files for you to get inspired:
[https://github.com/matiassingers/awesome-readme](https://github.com/matiassingers/awesome-readme)
And if you are lazy and want something ready to go, I made [this Readme](https://github.com/unnotedme/mastering-readme) file for you to copy and edit to your necessities. If you liked it, don't forget to give it a star! | unnotedme |
1,484,859 | What do you think about TDD (Test Driven Development)? | When I started my career as a developer, I was flooded with many articles about best practices, and... | 0 | 2023-05-29T15:57:45 | https://dev.to/yohantsn/what-do-you-think-about-tdd-test-driven-devolepment-3bib | testing, programming, development, discuss | When I started my career as a developer, I was flooded with many articles about best practices, and one, in particular, caught my attention: TDD.
In my case I used it a lot when projects were small and new, but now, working on a larger legacy codebase, I can't apply this concept.
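For readers who haven't tried it, the red-green loop is easy to sketch; here is a toy Python example (the function and names are purely illustrative) where the test was written first, then the code was written just to make it pass:

```python
# TDD in miniature: test_slugify() existed before slugify() did.
def slugify(title):
    """Turn a post title into a URL slug."""
    return "-".join(title.lower().split())

def test_slugify():
    # these assertions drove the implementation above
    assert slugify("Test Driven Development") == "test-driven-development"
    assert slugify("  Hello   World ") == "hello-world"

test_slugify()
print("all tests pass")
```

On a greenfield project this loop is cheap; on legacy code the hard part is usually making the code testable in the first place, which is exactly the tension the question raises.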
My reflection here is about the real gains, and about your experiences using it in different types of projects. | yohantsn |
1,485,231 | Free Online JavaScript Course with 10 Hours of Learning | Explore the free programming course “JavaScript do Zero” offered by Trybe! If you are... | 0 | 2023-05-30T02:01:52 | https://guiadeti.com.br/curso-de-javascript-online-e-gratuito/ | cursogratuito, bancodedados, css, cursosgratuitos | ---
title: Free Online JavaScript Course with 10 Hours of Learning
published: true
date: 2023-05-30 00:06:48 UTC
tags: CursoGratuito,bancodedados,css,cursosgratuitos
canonical_url: https://guiadeti.com.br/curso-de-javascript-online-e-gratuito/
---

Explore o **curso de programação gratuito** “JavaScript do Zero” oferecido pela Trybe! Se você está interessado em aprender a programar e deseja se familiarizar com uma das linguagens mais populares do mundo, este curso de JavaScript online é perfeito para você!
Descubra se a programação é a área certa para você, escrevendo suas primeiras linhas de código e mergulhando nos conceitos básicos.
Com certificado de participação e 10 horas de conteúdo gratuito, este curso oferece mais de 40 exercícios com gabarito para consolidar seus aprendizados na prática. Não perca essa oportunidade de iniciar sua jornada na programação com o curso de JavaScript online da Trybe!
## Contents
<nav><ul>
<li>
<a href="#curso-de-java-script-online">Online JavaScript Course</a><ul>
<li><a href="#aprendizado">What You Will Learn</a></li>
<li><a href="#publico-alvo">Target Audience</a></li>
<li><a href="#certificado">Certificate</a></li>
</ul>
</li>
<li><a href="#o-que-e-java-script">What Is JavaScript?</a></li>
<li>
<a href="#trybe">Trybe</a><ul><li><a href="#entrada-no-mercado-de-trabalho">Entering the Job Market</a></li></ul>
</li>
<li><a href="#inscricoes">Enrollment</a></li>
<li><a href="#compartilhe">Share!</a></li>
</ul></nav>
## Online JavaScript Course
Trybe presents the free programming course “JavaScript do Zero” as part of its comprehensive programming courses. If you want to learn to program and get familiar with one of the most popular languages in the world, this online JavaScript course is ideal for you!

_Trybe website – online JavaScript course_
By taking the online JavaScript course, you will have the opportunity to write your first lines of code and find out whether programming really is the right field for you.
One of the main advantages of this course is its learning flexibility: you can study at your own pace and on your own schedule. There is no selection process to enroll, making it accessible to anyone interested in starting a programming journey.
Upon completing the online JavaScript course, you will receive a certificate of participation that can be used to prove and share your experience with the job market.
### What You Will Learn
During the JavaScript do Zero course, you will learn the **fundamentals of programming** and algorithms, acquire basic programming knowledge and the programming logic needed to create programs, learn to approach complex problems by breaking them into smaller parts to solve them effectively, and write your first lines of code.
This **online JavaScript course** is designed to provide 10 hours of free content covering the basics of JavaScript, along with more than 40 practical exercises with answer keys to help consolidate the knowledge acquired.
The **free JavaScript course** covers relevant topics such as the potential of the JavaScript language; variables, constants and primitive types; operators and conditional structures; computational thinking; arrays and loops; and functions.
### Target Audience
The content of JavaScript do Zero is designed both for those who want to enter one of the most valued fields in the job market and for those who want a first hands-on contact with programming, even without previous experience.
This online JavaScript course is also suitable for those who want to make a career change but don't know where to start, or for those who want to review introductory programming concepts before going deeper into their studies.
The **JavaScript do Zero course** requires no prior knowledge and has no specific prerequisites. All you need is curiosity, a willingness to learn, and commitment.
### Certificate
To obtain the certificate of completion for the online JavaScript course, you need a score of 80% or higher on all quizzes (multiple-choice tests). Those approved will receive the certificate by email within 30 days of finishing all the quizzes.
Take advantage of this unique opportunity to start your programming journey and explore the exciting world of JavaScript with Trybe's JavaScript do Zero course. Enroll right now and discover the possibilities offered by this versatile and powerful language!
## What Is JavaScript?
JavaScript is a programming language used in web development that lets you build interactive, dynamic applications. With its flexible syntax, it is accessible to developers of all levels.
The [JavaScript programming language](https://guiadeti.com.br/guia-tags/cursos-de-javascript/) provides features for [manipulating HTML elements](https://guiadeti.com.br/guia-tags/cursos-de-html/), validating forms, and creating animations. With asynchronous calls, it allows interaction with servers without reloading the page. Compatible with many browsers and operating systems, it is a cross-platform language.
Together with HTML and CSS, it forms the foundation of web pages. Popular libraries and [frameworks](https://guiadeti.com.br/guia-tags/cursos-de-framework/), such as React and Angular, make advanced web development easier. The **JS language** is essential for creating interactive, responsive experiences on the web.
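As a tiny, hypothetical taste of the kind of logic JavaScript handles in form validation (the function name and the regular expression are illustrative, with no framework involved):

```javascript
// Toy form-validation check: does a string look like an email address?
function isValidEmail(value) {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}

console.log(isValidEmail("user@example.com")); // true
console.log(isValidEmail("not-an-email"));     // false
```

In a real page, a function like this would run inside a form's `submit` handler before the data is sent to the server.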
<iframe title="6 Motivos para Aprender JavaScript (JavaScript para Iniciantes)" width="1200" height="675" src="https://www.youtube.com/embed/8TwTqw9rl7Q?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
_Credits: Ilustra Dev channel_
## Trybe
The [Trybe Online Programming School](https://www.betrybe.com/) is a technology school that offers programming and web development courses with an innovative, hands-on approach.
Trybe's main goal is to train qualified professionals who are ready for the technology job market. Its differentiator is a teaching methodology that combines theory and practice intensively.
Trybe students have access to a complete, comprehensive curriculum, ranging from the [fundamentals of programming](https://guiadeti.com.br/guia-tags/cursos-de-programacao/) to advanced topics such as web development, back-end, front-end, databases, and more.
Students have the opportunity to work on real projects, simulating real job-market situations and developing essential practical skills.
Trybe's courses are highly interactive and guided by experienced industry professionals. Students have access to mentors and tutors who provide individual support and guidance throughout their learning journey.
Trybe offers a collaborative environment where students can interact with each other, share knowledge, and collaborate on team projects.
### Entering the Job Market
An important aspect of Trybe is its commitment to student employability. The school has partnerships with companies and organizations in the technology sector, aiming to ease students' entry into the job market.
Trybe offers support in the search for internships and **job opportunities**, as well as guidance and training for interviews and selection processes.
Trybe also values diversity and inclusion. The school offers **scholarships** to people from under-represented groups in technology, aiming to broaden access and help build a more diverse and inclusive professional community.
The company stands out as a technology school committed to training capable professionals, ready to face the challenges of the job market.
## Enrollment
[Enrollment for the JavaScript do Zero course](https://www.betrybe.com/curso-de-programacao-javascript-do-zero) is done on the Trybe website.
## Share!
Did you like this content about the online JavaScript course? Then share it with everyone!
The post [Free Online JavaScript Course with 10 Hours of Learning](https://guiadeti.com.br/curso-de-javascript-online-e-gratuito/) appeared first on [Guia de TI](https://guiadeti.com.br). | guiadeti |
1,486,186 | Top Productivity CLI Tools I Use on Linux | Introduction Are you ready to unlock the power of the Linux command line? As an avid Arch... | 0 | 2023-05-30T19:47:38 | https://dev.to/ken_mwaura1/top-productivity-cli-tools-i-use-on-linux-jga | linux, tooling, productivity, cli | ## Introduction
Are you ready to unlock the power of the Linux command line? As an avid Arch Linux user, I have discovered a treasure trove of exceptional CLI tools that have significantly enhanced my experience. From boosting productivity to streamlining tasks, these top-notch tools have become my trusted companions. Join me as we delve into a hand-picked selection of command-line gems that will revolutionize your Linux journey. Get ready to embark on a path of efficiency, flexibility, and command-line mastery. Welcome to the world of Linux's finest CLI tools!
Although this article is geared towards Linux users, most of the tools mentioned here are cross-platform and can be used on Windows (WSL2) and macOS as well. Feel free to try them out and let me know what you think.
## 1. [Bat](https://github.com/sharkdp/bat)
Bat is a cat clone with syntax highlighting and Git integration. It has helped me immensely when I want to view the contents of a file. It is a drop-in replacement for cat, with syntax highlighting for a lot of languages. It also uses a pager by default when run interactively, and behaves like plain cat when its output is piped to another command. It has a lot of other features, which you can read about in the [documentation](https://github.com/sharkdp/bat). In my case, I have aliased it to cat in my .fish config file.
```shell
alias cat bat
```

## 2. [Fish Shell](https://github.com/fish-shell/fish-shell)
Fish is an acronym for "friendly interactive shell". It is a smart and user-friendly shell for Unix-like operating systems such as Linux. A lot of features make it stand out from other shells like bash: autosuggestions, syntax highlighting, tab completions and more. You can read more about it in the [documentation](https://fishshell.com/). I have been using it for a while now and have it configured to my liking. For plugins, I use [fisher](https://github.com/jorgebucaran/fisher) as my plugin manager. These are the plugins I use on a daily basis:
- [z](https://github.com/jethrokuan/z)
- [fzf](https://github.com/junegunn/fzf)
- [zoxide](https://github.com/ajeetdsouza/zoxide)
- [exa](https://github.com/ogham/exa)
- [bat](https://github.com/sharkdp/bat)
- [tide theme](https://github.com/IlanCosman/tide)
- [nvm](https://github.com/jorgebucaran/nvm.fish)
## 3. [Fzf](https://github.com/junegunn/fzf)
Fzf is a command-line fuzzy finder. It is a general-purpose tool that can be used for many things: searching for files, searching for processes, searching for git commits and a lot more. It is very powerful and I use it on a daily basis. I have it bound to Ctrl+R for fuzzy history search through fzf's fish integration in my **.fish** config file (one way to set this up, assuming a recent fzf version; older setups use the fzf fish plugin instead):
```shell
# load fzf's fish key bindings, which bind Ctrl+R to history search
fzf --fish | source
```

## 4. [Zoxide](https://github.com/ajeetdsouza/zoxide)
Zoxide is a blazing fast alternative to [z](https://github.com/jethrokuan/z). It is a smarter cd command that helps you navigate your filesystem faster by remembering the directories you visit most. It provides a lot of features, such as interactive selection via fzf, and I use it on a daily basis. It is initialized in my **.fish** config file, which is what defines the `z` command:
```shell
# initialize zoxide for fish; this defines the `z` command
zoxide init fish | source
```

## 5. [Exa](https://github.com/ogham/exa)
Exa, a powerful file management tool, is gaining popularity as a modern replacement for the traditional "ls" command. With its impressive set of features and user-friendly interface, Exa offers a superior file listing experience. It supports colorized output, making it easier to identify file types at a glance. The built-in Git integration feature provides valuable insights into the status of files and directories within a repository. Additionally, Exa boasts a convenient tree view option, allowing users to visualize the directory structure in a hierarchical format.
Excitingly, Exa can be seamlessly integrated into the Fish shell environment by adding an alias to the .fish configuration file. This way, users can enjoy the enhanced functionality of Exa while using the familiar "ls" command in their daily workflow. Discover the potential of Exa and elevate your file management capabilities today. I have it aliased to ls in my **.fish** config file.
```shell
alias ls exa
```

## 6. [Tide](https://github.com/IlanCosman/tide)
The Tide theme for the Fish shell is an aesthetically pleasing and highly customizable option that can transform your command-line experience. With its clean and minimalist design, Tide brings a sense of elegance to your terminal. The theme combines a soothing color palette with well-defined prompt elements, enhancing readability and reducing visual clutter.
The Tide theme offers a range of customization options, allowing users to personalize their prompt appearance, such as adjusting colors, adding informative segments, and even incorporating Git status indicators. Whether you prefer a sleek and minimalistic look or want to add a touch of flair to your terminal, the Tide theme for Fish shell delivers a visually appealing and personalized command-line interface. Elevate your Fish shell experience with Tide and enjoy a fresh and stylish approach to your everyday workflow.
The Tide theme is installed with Fisher, the plugin manager mentioned above (for example, `fisher install IlanCosman/tide@v6`; check the repo for the current version tag). Once installed, run the interactive configuration wizard to pick your prompt style. I have it configured in my **.fish** config file.
```shell
tide configure
```

## 7. [Bpytop](https://github.com/aristocratos/bpytop)
Bpytop is a powerful and visually appealing resource monitor for the command-line interface. You can find the [bpytop project on GitHub](https://github.com/aristocratos/bpytop). The repository provides detailed information about bpytop, including installation instructions, usage examples, and documentation.
Bpytop offers a feature-rich and customizable interface for monitoring system resources such as CPU usage, memory usage, network activity, disk I/O, and more. It provides real-time graphs, color-coded statistics, and a user-friendly layout, making it easy to visualize and analyze system performance at a glance. With its intuitive keybindings, you can navigate through different sections and access detailed information about specific processes or system components.
Additionally, bpytop offers a variety of customization options, allowing you to personalize the appearance and behavior of the resource monitor according to your preferences. Whether you are a system administrator, developer, or power user, bpytop is a valuable tool for monitoring and understanding system performance.

## 8. [Neofetch](https://github.com/dylanaraps/neofetch)
Neofetch is a popular command-line tool that displays system information in a visually appealing and informative manner. With Neofetch, you can quickly retrieve details about your operating system, kernel version, CPU, memory, and other hardware components. It also fetches information about the desktop environment or window manager, providing a comprehensive overview of your system configuration.
Neofetch offers various customization options, allowing you to personalize the displayed information, choose ASCII art logos, and even integrate additional functionality using plugins.

## 9. [Fd](https://github.com/sharkdp/fd)
Fd is an efficient and user-friendly filesystem search tool that offers a simple and fast alternative to the traditional "find" command. Designed to provide a streamlined search experience, fd prioritizes speed and ease of use while offering sensible defaults for most common use cases. While it may not encompass all the advanced features of "find," fd's optimized approach delivers quick and intuitive file searching. Discover fd on [GitHub](https://github.com/sharkdp/fd) and enhance your filesystem search capabilities with this lightweight and versatile tool that meets the majority of user requirements.

## 10. [lazydocker](https://github.com/jesseduffield/lazydocker)
Lazydocker is a terminal UI for the Docker engine that offers a convenient and user-friendly alternative to the Docker CLI. With its intuitive interface and streamlined workflow, lazydocker simplifies the process of managing Docker containers, images, volumes, and networks. It provides a comprehensive overview of your Docker environment, allowing you to monitor and control your containers with ease.
Lazydocker offers a range of features, including the ability to view logs, attach to running containers, and execute commands. Additionally, it provides a built-in terminal emulator, allowing you to run commands directly from the lazydocker interface. Whether you are a Docker beginner or an experienced user, lazydocker is a valuable tool for managing your Docker environment.

## 11. [act](https://github.com/nektos/act)
Introducing Act, the indispensable command-line tool that empowers you to execute your GitHub Actions locally. Act provides a seamless solution for testing workflows directly on your local machine before deploying them to GitHub. With Act, you can ensure the accuracy and efficiency of your workflows without the need for constant commits and pushes.
Act goes beyond local testing by enabling the execution of workflows on self-hosted runners. This feature proves invaluable when running workflows on machines disconnected from the internet, such as Raspberry Pi devices. With Act, you have the flexibility to effortlessly run your workflows on a variety of environments, improving productivity and expanding the possibilities of your GitHub Actions. Embrace Act and take control of your GitHub Actions workflow development today. Read more about Act on my [blog post](https://dev.to/ken_mwaura1/run-github-actions-on-your-local-machine-bdm).

## Honorable Mentions
- [lazygit](https://github.com/jesseduffield/lazygit) - A simple terminal UI for git commands, written in Go with the gocui library.
- [yay](https://github.com/Jguer/yay) is a robust and user-friendly AUR (Arch User Repository) helper for Arch Linux and Arch-based distributions written in Go.
- [ytop](https://github.com/cjbassi/ytop) - A TUI system monitor written in Rust.
- [bottom](https://github.com/ClementTsang/bottom) - A cross-platform graphical process/system monitor with a customizable interface and a multitude of features.
- [Neovim](https://github.com/neovim/neovim): Modern version of the Vim text editor.
- [taskwarrior](https://github.com/GothenburgBitFactory/taskwarrior): Feature-rich command-line task manager.
- [tmux](https://github.com/tmux/tmux): Terminal multiplexer for managing multiple terminal sessions.
 | ken_mwaura1 |
1,635,477 | Sync a Windows local folder with AWS S3 | Sync a Windows local folder with AWS S3. Prerequisites: you already know how to use the S3 CLI (with an Access Key)... | 0 | 2023-10-15T16:44:51 | https://dev.to/longtth/sync-windows-local-folder-voi-aws-s3-4m9i | tips | Sync a Windows local folder with AWS S3
Prerequisites
you already know how to use the S3 CLI (with an Access Key)
1. install the [aws cli](https://aws.amazon.com/cli/)
2. create a file `sync-my-folder.bat`
in that bat file, type the command
```
aws s3 sync s3://mybucket D:\folder-can-sync
```
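If you also want deletions to be mirrored, `aws s3 sync` supports a `--delete` flag that removes destination files no longer present at the source. An alternative `.bat` body could look like this (the bucket name and folder path are examples from this article, adjust them to your own):

```bat
REM pull changes from the bucket, removing local files deleted remotely
aws s3 sync s3://mybucket D:\folder-can-sync --delete
```

Use `--delete` with care: it permanently removes files on the destination side.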
3. open Task Scheduler (press the `Windows` key and type "task scheduler")
under Triggers, set the execution schedule,
under Actions, point to the `sync-my-folder.bat` file above

Enjoy
thanks to the [original article](https://medium.com/@dangaldeependra/synchronizing-between-local-windows-drive-and-aws-s3-adfed52dba66)
| longtth |
1,636,205 | 5 Tips for Managing Small Business Finance | Starting and running a small business is a challenging endeavor, and one of the most critical aspects... | 0 | 2023-10-16T12:28:20 | https://dev.to/sanya3245/5-tips-for-managing-small-business-finance-1fnk | finance | Starting and running a small business is a challenging endeavor, and one of the most critical aspects of success is managing your finances effectively. Small business finance management can be the difference between growth and stagnation. In this blog post, we'll share five essential tips to help you navigate the financial landscape of your small business and ensure its long-term success.
**Create a Detailed Budget**
Effective budgeting is the foundation of financial management for any small business. Start by creating a detailed budget that outlines your expected income and expenses. Be sure to consider all the costs associated with running your business, from rent and utilities to inventory and employee salaries. Regularly review and update your budget to ensure it remains accurate and relevant.
**Monitor Cash Flow**
Cash flow is the lifeblood of your small business. To manage it effectively, keep a close eye on your income and expenses. Set up a system for tracking payments, invoices, and bills. Consider using accounting software or hiring a professional accountant to help you maintain a clear and up-to-date view of your cash flow.
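The arithmetic behind cash-flow monitoring is simple, and a toy sketch makes it concrete (all figures below are made up for illustration):

```python
# Toy cash-flow check: monthly income minus monthly expenses.
income = [4200.00, 3900.00, 4500.00]    # monthly revenue
expenses = [3100.00, 3600.00, 3300.00]  # monthly costs

net_flows = [i - e for i, e in zip(income, expenses)]
print(net_flows)       # [1100.0, 300.0, 1200.0]
print(sum(net_flows))  # 2600.0 total over the quarter
```

A negative number in any month is the early-warning sign this section is about; accounting software automates exactly this kind of tracking.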
**Separate Business and Personal Finances**
It's essential to maintain a clear distinction between your personal and business finances. Open a dedicated business bank account and credit card to track your business transactions separately. This not only simplifies record-keeping but also helps protect your personal assets in case of legal or financial issues.
**Save for Emergencies and Investments**
Even a small business should have an emergency fund. Set aside a portion of your profits into a business savings account. This fund can serve as a financial safety net to cover unexpected expenses, such as equipment repairs or sudden dips in revenue. Additionally, consider saving for future investments that can help your business grow, such as expansion, marketing campaigns, or new product development.
**Regularly Review and Adjust**
Small business finance management is an ongoing process. Regularly review your financial statements, budgets, and cash flow projections. This will help you identify trends, spot areas for improvement, and make necessary adjustments to your financial strategy. Staying proactive in financial management is key to long-term success.
Managing your small business's finances can be a daunting task, but it's crucial for sustainability and growth. By creating a detailed budget, monitoring cash flow, separating personal and business finances, saving for emergencies and investments, and regularly reviewing and adjusting your financial strategy, you'll be better equipped to navigate the financial challenges that come with running a small business. These tips will not only help you maintain financial stability but also position your business for future success.
If you ever find yourself needing expert guidance in finance and accounting, consider partnering with a [finance and accounting outsourcing service provider](https://www.invensis.net/finance-accounting-services). They can offer specialized knowledge and support to ensure your business's financial health and compliance, allowing you to focus on what you do best – growing your business. | sanya3245 |
1,635,550 | Get involved: Your guide to contributing to WebCrumbs | Hey there, champ! So, you're itching to dive into the WebCrumbs community, eh? Fantastic! You're... | 0 | 2023-10-15T22:15:00 | https://dev.to/buildwebcrumbs/get-involved-your-guide-to-contributing-to-webcrumbs-30p7 | hacktoberfest, hacktoberfest23, opensource, beginners | Hey there, champ! So, you're itching to dive into the WebCrumbs community, eh? Fantastic! You're about to join a legion of coders hell-bent on making React development as smooth as silk. Here's how you can get your boots on the ground.
## First step: the lay of the land
Start by taking a tour of the [WebCrumbs GitHub repository](https://github.com/webcrumbs-community/webcrumbs). It's the control center of our open-source mission. Check out the `README` for a general overview and the `CONTRIBUTING` file for the nitty-gritty.
## Fork it, clone it, branch it
Alright, you know the drill. Fork the repo, clone it locally, and create a new branch. This way, you're all set to work your magic without stepping on any toes.
## Pick your battle
Whether you're a frontend maestro, a backend virtuoso, or a doc-wizard, there's room for you. Go through the open issues, pick one that resonates, and stake your claim.
## Code like a rockstar
Write clean, comment generously, and stick to the style guide. WebCrumbs is all about quality, so make every line of code count.
## The PR moment of truth
Submit a pull request, and wait for the review. Don't sweat it—feedback is how we grow. Once your PR gets the green light, you're officially part of WebCrumbs history.
## Join the conversation
Not just a coder? Fabulous! Join our Discord, engage in forums, write blog posts, or share your WebCrumbs success stories. The more the merrier!
## The icing on the cake
As you contribute, you're not just accumulating GitHub stars. You're building relationships, honing your skills, and, of course, getting those sweet, sweet open-source karma points.
### Ready to roll up your sleeves?
Jump over to the [WebCrumbs GitHub repository](https://github.com/webcrumbs-community/webcrumbs) and start your journey. Contribute code, ideas, or even a morale-boosting GIF. Let's make React a cakewalk, together.
{% embed https://github.com/webcrumbs-community/webcrumbs %} | opensourcee |
1,635,797 | how to fix this issue on my own | Uncaught Reference Error: Submit is not defined shown on my browser inspect let c1 =... | 0 | 2023-10-16T04:26:45 | https://dev.to/rbalaji150720/uncaught-reference-error-submit-is-not-defined-shown-on-my-browser-inspect-2n8k | > **_Uncaught Reference Error: Submit is not defined shown on my browser inspect_**
```
let c1 = document.getElementById('c1')
let c2 = document.getElementById('c2')
let Bd1 = document.getElementById('Bd')
let index=2;
Bd.addEventListener(Submit,(d)=>{
d.preventDefault();
let num1=parseInt(c1.value);
let num2=parseInt(c2.value);
let c3 = num1 === num2;
alert(c3?"serial number is matched" :"number is unmatched");
console.log(c3?"serial number is matched": "number is unmatched");
});
```
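The `ReferenceError` happens because `addEventListener(Submit, ...)` passes the bare identifier `Submit`, which was never defined; the event name must be the string `'submit'`. Note also that the element is stored in the variable `Bd1`, while the listener is attached to `Bd` (that only works because browsers expose elements with an `id` as globals). A hedged sketch of a fix follows, with the comparison pulled into a plain function so it can run outside the browser:

```javascript
// Comparison logic extracted so it works with or without a DOM.
function serialMatchMessage(a, b) {
  const matched = parseInt(a, 10) === parseInt(b, 10);
  return matched ? "serial number is matched" : "number is unmatched";
}

// Browser wiring, guarded so the file also loads outside a browser.
if (typeof document !== "undefined") {
  const c1 = document.getElementById("c1");
  const c2 = document.getElementById("c2");
  const Bd1 = document.getElementById("Bd");
  Bd1.addEventListener("submit", (d) => { // 'submit' as a string, on Bd1
    d.preventDefault();
    alert(serialMatchMessage(c1.value, c2.value));
  });
}
```

With this in place, submitting the form with ids `c1`, `c2`, and `Bd` should show the alert instead of throwing.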
| rbalaji150720 | |
1,635,913 | Perl Weekly #638 - Dancing Perl? | Originally published at Perl Weekly 638 Hi there, What a great and pleasant surprise announcement... | 20,640 | 2023-10-16T07:33:57 | https://perlweekly.com/archive/638.html | perl, news, programming | ---
title: Perl Weekly #638 - Dancing Perl?
published: true
description:
tags: perl, news, programming
canonical_url: https://perlweekly.com/archive/638.html
series: perl-weekly
---
Originally published at [Perl Weekly 638](https://perlweekly.com/archive/638.html)
Hi there,
What a great and pleasant surprise: the announcement of the release of <a href="https://blogs.perl.org/users/jason_a_crome/2023/10/announcing-dancer2-100.html">Dancer2 1.0.0</a> by the <strong>Dancer Core Team</strong>!!!
I am a big fan of <strong>Dancer2</strong>, so I follow the development of my favourite web framework closely. If you remember, there was another announcement recently about a change in the core team, with <a href="https://blogs.perl.org/users/jason_a_crome/2023/09/announcing-dancer-core-team-changes.html">Ruth Holloway</a> joining it. I am very happy that it is in safe hands and actively developed and maintained by the core team.
The <strong>Hacktoberfest 2023</strong> is still going on. Have you made the minimum number of pull requests required to complete the challenge? If not, you still have time to do so. Please make sure you submit your pull requests before the deadline, i.e. the last day of the month. In <strong>2019</strong>, I managed to submit <strong>160 pull requests</strong> in one month. Unfortunately, for the last couple of years I haven't participated in the event because of other commitments.
Do you want to play with <strong>AI</strong>? Luckily we have <a href="https://chatgpu.ai/pages/navi-ai">Navi AI</a>. I strongly recommend you give it a go.
Last but not least, let us all pray for peace on both sides of the conflict. I don't watch the news these days, as it is full of heartbreaking stories.
Enjoy the rest of the newsletter.
--
Your editor: Mohammad S. Anwar.
## Announcements
### [Announcing Dancer2 1.0.0](https://blogs.perl.org/users/jason_a_crome/)
Latest release of Dancer2 now following semantic versioning. Jason shared the details of latest release.
### [This week in PSC (120)](https://blogs.perl.org/users/psc/2023/10/this-week-in-psc-120.html)
PSC shared the topics last discussed. Thank you PSC.
---
## Articles
### [Bitcoin::Crypto version 2.000 released!](https://bbrtj.eu/blog/article/bitcoin-crypto-2-released)
Are you Bitcoin fan? Then this is for you, do checkout.
### [pushover.net](https://github.com/hightowe/pushover-notify)
A simple command-line Pushover.net notifier script, written in Perl
---
## The Weekly Challenge
<a href="https://theweeklychallenge.org/">The Weekly Challenge</a> by <a href="http://www.manwar.org/">Mohammad Anwar</a> will help you step out of your comfort-zone. You can even win prize money of $50 Amazon voucher by participating in the weekly challenge. We pick one winner at the end of the month from among all of the contributors during the month. The monthly prize is kindly sponsored by Peter Sergeant of <a href="https://perl.careers/">PerlCareers</a>.
### [The Weekly Challenge - 239](https://theweeklychallenge.org/blog/perl-weekly-challenge-239)
Welcome to a new week with a couple of fun tasks: "Same String" and "Consistent Strings". If you are new to the weekly challenge, why not join us and have fun every week? For more information, please read the <a href="https://theweeklychallenge.org/faq">FAQ</a>.
### [RECAP - The Weekly Challenge - 238](https://theweeklychallenge.org/blog/recap-challenge-238)
Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Running Sum" and "Persistence Sort" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.
### [Meet The Champion - Robbie Hatley](https://theweeklychallenge.org/blog/meet-the-champion-2023-09/)
Please checkout the interview with the latest champion of The Weekly Challenge.
### [TWC238](https://deadmarshal.blogspot.com/2023/10/twc238.html)
CPAN always comes in handy for regular tasks. The end result is compact and concise.
### [Persistently Running](https://raku-musings.com/persistently-running.html)
The cool use of map in Raku, although gather/take could do the job. Plenty of choices available for you in Raku.
### [Running Sum, or "I ♥️ Recursion”"](https://variousandsundry.com/running-sum-or-i-love-recursion/)
For a first read, I would say this is worth checking out. Lots of interesting things for you to take a look at.
### [Persistence Sort, In Which I Give Up and Use a Global](https://variousandsundry.com/persistence-sort-in-which-i-give-up-and-use-a-global/)
Recursion seems to be more apt for this task, as suggested in the blog post. A well-structured post, I must admit.
### [Running and Persistence](https://dev.to/boblied/pwc-238-running-and-persistence-4kla)
What an inspirational story to begin the blog post with. Thanks for sharing.
### [You Can't Touch This!](https://jacoby.github.io/2023/10/09/you-cant.html)
Interesting fact about 238, very impressive. Thanks for sharing the knowledge with us.
### [Perl Weekly Challenge: Week 238](https://www.braincells.com/perl/2023/10/perl_weekly_challenge_week_238.html)
Raku once again showing off the cool features. Please checkout the elegant one-liner.
### [Perl Weekly Challenge 238: Running Sum](https://blogs.perl.org/users/laurent_r/2023/10/perl-weekly-challenge-238-running-sum.html)
Use of the reduction meta-operator in Raku and a similar alternative in Perl. Highly recommended.
### [Perl Weekly Challenge 238: Persistence Sort](https://blogs.perl.org/users/laurent_r/2023/10/perl-weekly-challenge-238-persistence-sort.html)
Having Raku and Perl solutions side-by-side is fun and easy way to learn the tricks. Thanks for sharing.
### [THE WEEKLY CHALLENGE - 238](https://egroup.kolouch.org/nextcloud/sites/lubos/2023-10-09_Weekly_challenge_238)
Perl and Raku can be so similar doing the same task, and even Python is not far behind. Keep sharing the knowledge every week.
### [running sums and multiplications](https://fluca1978.github.io/2023/10/09/PerlWeeklyChallenge238.html)
What a surprise, we got Python for the first time. No wonder, it is the most popular choice after Perl and Raku.
### [PWC 238](https://wlmb.github.io/2023/10/09/PWC238/)
The master of the one-liner in Perl is once again sharing another masterpiece. Keep up the great work.
### [Reduced Arrays, Reduced Numbers, Reduced Code](https://github.com/manwar/perlweeklychallenge-club/tree/master/challenge-238/matthias-muth#readme)
Nice to know about two cool functions 'reduce' and 'reductions' from List::Utils. Thanks for sharing.
### [Be Runnin’ Up That Sum, Be Persisten’ Up That Sort](https://packy.dardan.com/2023/10/09/be-runnin-up-that-sum-be-persisten-up-that-sort/)
You are presented with different ways of dealing with a task. Highly recommended.
### [Running persistence](http://ccgi.campbellsmiths.force9.co.uk/challenge/238)
Simple yet elegant solution with cool interface to try it yourself. Thank you.
### [The Weekly Challenge #238](https://hatley-software.blogspot.com/2023/10/robbie-hatleys-solutions-to-weekly_9.html)
Short and precise description of Perl solutions by the latest champion.
### [Running Persistence](https://blog.firedrake.org/archive/2023/10/The_Weekly_Challenge_238__Running_Persistence.html)
Variable free solution in PostScript? Well it is new to me. Thanks for sharing the knowledge.
### [Counting and sorting](https://dev.to/simongreennet/counting-and-sorting-4136)
Use of lambda in Python is fun. Even the Perl magic is worth checking.
---
## Rakudo
### [2023.41 Free In Three](https://rakudoweekly.blog/2023/10/09/2023-41-free-in-three/)
---
## Weekly collections
### [NICEPERL's lists](http://niceperl.blogspot.com/)
<a href="https://niceperl.blogspot.com/2023/10/cdlxv-9-great-cpan-modules-released.html">Great CPAN modules released last week</a>.
---
## <a href="https://perl.careers/?utm_source=perlweekly&utm_campaign=perlweekly&utm_medium=perlweekly">Perl Jobs by Perl Careers</a>
### [Modern Perl and positive team vibes. UK Remote Perl role](https://job.perl.careers/4tr)
If you’re a Modern Perl developer in the UK with TypeScript or Node and you’re searching for a team of dynamos, we’ve found the perfect place for you. This award-winning company may be newer, but the combined experience of their people is impressive. No doubt this is one of the many reasons their AI recruitment marketing business has taken off!
### [UK Remote Perl Programmer for Leading Enterprise Tech Publication](https://job.perl.careers/od3)
Our client is a global leader in the enterprise technology publishing industry, providing audiences worldwide with stimulating perspectives and unique news on enterprise tech that matters today and tomorrow. They are seeking a talented Perl programmer to manage the full life-cycle of software projects on a remote basis. The ideal candidate must be UK-based and have experience writing high-quality
### [Perl Programmer Opportunity - Join a Prominent Tech Publishing Powerhouse in the Philippines](https://job.perl.careers/1q4)
Our UK-based client is a global leader in the enterprise technology publishing industry, providing audiences worldwide with stimulating perspectives and unique news on enterprise tech that matters today and tomorrow. They are currently seeking a passionate and exceptional Perl programmer based in the Philippines to join their team.
### [Adventure! Senior Perl roles in Malaysia, Dubai and Malta](https://job.perl.careers/r2r)
Clever folks know that if you’re lucky, you can earn a living and have an adventure at the same time. Enter our international client: online trading is their game, and they’re looking for Perl people with passion, drive, and an appreciation for new experiences.
---
You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.
Want to see more? See the [archives](https://perlweekly.com/archive/) of all the issues.
Not yet subscribed to the newsletter? [Join us free of charge](https://perlweekly.com/subscribe.html)!
(C) Copyright [Gabor Szabo](https://szabgab.com/)
The articles are copyright the respective authors.
| szabgab |
1,636,025 | Data Modeling. | Over the years, many businesses have been cautious about decision making processes that affect them.... | 0 | 2023-10-23T08:34:48 | https://dev.to/philemonkiplangat/data-modeling-2iio | Over the years, many businesses have been cautious about decision making processes that affect them. This is important since decisions made by a business determines its success. The part of the decision is forecasting which can be made possible by studying the growth of the business. The oil for decision making has been data. It is through it where one can obtain insights about the business and the growth pattern. Decision making process is greatly characterized by the data modelling process.
**Data modeling** refers to the process of analyzing and defining all the data types your business collects and produces, as well as the relationships between those bits of data. This can be achieved using different tools in the tech field. A model can be represented using text, symbols, and diagrams, since it describes how the data is captured, stored, and used.
## Data Modeling Process
This refers to the process of creating a conceptual representation of data objects and their relationships to one another. Data modeling typically involves specific, defined steps:
- Requirements gathering.
- Conceptual design.
- Logic design.
- Implementation.
During each step, data modelers work with stakeholders to understand the data requirements, define the entities and attributes, establish the relationships between the data objects, and create a model that accurately represents the data in a way the stakeholders can use.
### Levels of abstraction.
- **Conceptual level**: identifies the high-level entities, their attributes, and the relationships among them, expressed in terms the stakeholders understand.
- **Logical level**: defines the relationships and constraints between the data objects in more detail, often using data modeling languages such as SQL or ER diagrams.
- **Physical level**: defines the specific details of how the data will be stored, including data types, indexes, and other technical details.
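As a concrete illustration of the physical level, the sketch below (a made-up one-to-many "customer places orders" example, using Python's built-in `sqlite3`) turns a simple entity relationship into DDL with data types and a foreign-key constraint:

```python
import sqlite3

# Physical-level model for a hypothetical one-to-many relationship:
# one customer places many orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE "order" (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        total       REAL
    );
""")
conn.execute("INSERT INTO customer VALUES (1, 'Asha')")
conn.execute('INSERT INTO "order" VALUES (10, 1, 42.5)')

# The relationship defined at the conceptual level is now queryable:
row = conn.execute("""
    SELECT c.name, o.total FROM customer c
    JOIN "order" o ON o.customer_id = c.customer_id
""").fetchone()
print(row)  # → ('Asha', 42.5)
```

The table names, columns, and data are invented for illustration; a real physical model would come out of the earlier requirements-gathering and logical-design steps.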
## Data Modeling Examples
The best way to picture a data model is to think about a building plan of an architect. An architectural building plan assists in putting up all subsequent conceptual models, and so does a data model.
- **Entity Relationship Model**: This model is based on the notion of real-world entities and the relationships among them. It creates an entity set, a relationship set, general attributes, and constraints.
- **Hierarchical Model**: This data model arranges the data in the form of a tree with one root, to which other data is connected. The hierarchy begins with the root and extends like a tree. This model effectively expresses real-time relationships through a single one-to-many relationship between two different kinds of data.
- **Network Model**: This database model enables many-to-many relationships among the connected nodes. The data is arranged in a graph-like structure, and here 'child' nodes can have multiple 'parent' nodes. The parent nodes are known as owners, and the child nodes are called members.
- **Relational Model**: This popular data model example arranges the data into tables. The tables have columns and rows, each cataloging an attribute present in the entity. It makes relationships between data points easy to identify.
- **Object-Relational Model**: This model is a hybrid of an object-oriented database and a relational database. As a result, it combines the extensive functionality of the object-oriented paradigm with the simplicity of the relational data model.
## Benefits Of Data Modeling.
- Shared understanding: Data modeling allows developers and stakeholders to understand the relationships between different objects, making analysis easier.
- Improved data quality: Data modeling can help to identify errors and inconsistencies in the data, which can improve the overall quality of the data and prevent problems later on.
- Improved collaboration: Data modeling helps to facilitate communication and collaboration among stakeholders, which can lead to more effective decision-making and better outcomes.
- Increased efficiency: Data modeling can help to streamline the development process by providing a clear and consistent representation of the data that can be used by developers, database administrators, and other stakeholders.
| philemonkiplangat | |
1,636,077 | Scraping AliExpress with Python | This article was originally posted on Crawlbase Blog. In the expansive world of e-commerce data... | 0 | 2023-10-16T10:58:09 | https://dev.to/crawlbase/scraping-aliexpress-with-python-2gb3 | python, webdev, dataengineering, data |
This article was originally posted on [Crawlbase Blog](https://crawlbase.com/blog/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution).
In the expansive world of e-commerce data retrieval, Scraping AliExpress with Python stands out as a vital guide for seasoned and novice data enthusiasts. This guide gently walks you through the step-by-step tutorial of scraping AliExpress using [Crawlbase Crawling API](https://crawlbase.com/docs/crawling-api/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution 'Crawlbase Crawling API').
<!-- more -->
[Click here](#Setting-Up-Your-Environment) to jump right in the first step in case you want to skip the introduction.
## Getting Started
Now that you're here, let's roll up our sleeves and get into the nitty-gritty of web scraping AliExpress using the [Crawlbase Crawling API](https://crawlbase.com/docs/crawling-api/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution 'Crawlbase Crawling API') with Python. But first, let's break down the core elements you need to grasp before we dive into the technical details.
### Brief overview of Web Scraping
In a world where information reigns supreme, [web scraping](https://crawlbase.com/blog/web-scraping-the-comprehensive-guide/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution 'Web Scraping') is the art and science of extracting data from websites. It's a digital detective skill that allows you to access, collect, and organize data from the vast and ever-evolving landscape of the internet.
Think of web scraping as a bridge between you and a treasure trove of information online. Whether you're a business strategist, a data analyst, a market researcher, or just someone with a thirst for data-driven insights, web scraping is your key to unlocking the wealth of data that resides on the web. From product prices and reviews to market trends and competitor strategies, web scraping empowers you to access the invaluable data hidden within the labyrinth of web pages.
### Importance of Scraping AliExpress

Scraping AliExpress with Python has become a pivotal strategy for data enthusiasts and e-commerce analysts worldwide. AliExpress, an online retail platform under the Alibaba Group, is not just a shopping hub but a treasure trove of data waiting to be explored. With millions of products, numerous sellers, and a global customer base, AliExpress provides a vast dataset for those seeking a competitive edge in e-commerce.
By scraping AliExpress with Python, you can effectively scour the platform for product information, pricing trends, seller behaviors, and customer reviews, thereby unlocking invaluable insights into the ever-changing landscape of online retail. Imagine the strategic benefits of having access to real-time data on product prices, trends, and customer reviews. Envision staying ahead of your competition by continuously monitoring market dynamics, tracking the latest product releases, and optimizing your pricing strategy based on solid, data-backed decisions.
When you utilize web scraping techniques, especially with powerful tools like the Crawlbase Crawling API, you enhance your data-gathering capabilities, making it a formidable weapon in your e-commerce data arsenal.
### Introduction to the Crawlbase Crawling API
Our key ally in this web scraping endeavor is the [Crawlbase Crawling API](https://crawlbase.com/docs/crawling-api/scrapers/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution 'Crawlbase Crawling API'). This robust tool is your ticket to navigating the complex world of web scraping, especially when dealing with colossal platforms like AliExpress. One of its standout features is IP rotation, which is akin to changing your identity in the digital realm. Picture it as donning various disguises while navigating a crowded street; it ensures AliExpress sees you as a regular user, significantly lowering the risk of being flagged as a scraper. This guarantees a smooth and uninterrupted data extraction process.
This API's built-in scrapers tailored for AliExpress make it even more remarkable. Along with AliExpress scraper, Crawling API also provide built-in scrapers for other important websites. You can read about them [here](https://crawlbase.com/docs/crawling-api/scrapers/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution 'Scrapers'). These pre-designed tools simplify the process by efficiently extracting data from AliExpress's search and product pages. For an easy start, Crawlbase gives 1000 free crawling requests. Whether you're a novice in web scraping or a seasoned pro, the Crawlbase Crawling API, with its IP rotation and specialized scrapers, is your secret weapon for extracting data from AliExpress effectively and ethically.
In the upcoming sections, we'll equip you with all the knowledge and tools you need to scrape AliExpress effectively and ethically. You'll set up your environment, understand AliExpress's website structure, and become acquainted with Python, the programming language that will be your ally in this endeavor.
## Setting Up Your Environment
Before we embark on our AliExpress web scraping journey, it's crucial to prepare the right environment. This section will guide you through the essential steps to set up your environment, ensuring you have all the tools needed to successfully scrape AliExpress using the Crawlbase Crawling API.
### Installing Python and Essential Libraries
Python is the programming language of choice for our web scraping adventure. If you don't already have Python installed on your system, follow these steps:
1. **Download Python**: Visit the [Official Python Website](https://www.python.org/downloads/ 'Official Python Website') and download the latest version of Python for your operating system.
2. **Installation**: Run the downloaded Python installer and follow the installation instructions.
3. **Verification**: Open your command prompt or terminal and type `python --version` to verify that Python has been successfully installed. You should see the installed Python version displayed.
Now that you have Python up and running, it's time to install some essential libraries that will help us in our scraping journey. We recommend using pip, Python's package manager, for this purpose. Open your command prompt or terminal and enter the following commands:
```bash
pip install pandas
pip install crawlbase
```
**Pandas**: This is a powerful library for data manipulation and analysis, which will be essential for organizing and processing the data we scrape from AliExpress.
**Crawlbase**: This library will enable us to make requests to the Crawlbase APIs, simplifying the process of scraping data from AliExpress.
### Creating a Virtual Environment (Optional)
Although not mandatory, it's considered good practice to create a virtual environment for your project. This step ensures that your project's dependencies are isolated, reducing the risk of conflicts with other Python projects.
To create a virtual environment, follow these steps:
1. **Install Virtualenv**: If you don't have Virtualenv installed, you can install it using pip:
```bash
pip install virtualenv
```
2. **Create a Virtual Environment**: Navigate to your project directory in the command prompt or terminal and run the following command to create a virtual environment named 'env' (you can replace 'env' with your preferred name):
```bash
virtualenv env
```
3. **Activate the Virtual Environment**: Depending on your operating system, use one of the following commands to activate the virtual environment:
- **For Windows**:
```bash
.\env\Scripts\activate
```
- **For macOS and Linux**:
```bash
source env/bin/activate
```
You'll know the virtual environment is active when you see the environment name in your command prompt or terminal.
### Obtaining a Crawlbase API Token
We will utilize the Crawlbase Crawling API to efficiently gather data from various websites. This API streamlines the entire process of sending [HTTP requests](https://crawlbase.com/blog/parallel-http-requests/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution 'HTTP requests') to websites, seamlessly handles IP rotation, and effectively tackles common web challenges such as CAPTCHAs. Here's the step-by-step guide to obtaining your Crawlbase API token:
1. **Head to the Crawlbase Website**: Begin by opening your web browser and navigating to the official [Crawlbase](https://crawlbase.com/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution 'Crawlbase') website.
2. **Sign Up or Log In**: Depending on your status, you'll either need to create a new Crawlbase account or log in to your existing one.
3. **Retrieve Your API Token**: Once you're logged in, locate the documentation section on the website to access your API token. Crawlbase provides two types of tokens: the Normal (TCP) token and the JavaScript (JS) token. The Normal token is suitable for websites with minimal changes, like static sites. However, if the website relies on JavaScript for functionality or if crucial data is generated via JavaScript on the user's side, the JavaScript token is essential. For example, when scraping data from dynamic websites like AliExpress, the Normal token is your go-to choice. You can get your API token [here](https://crawlbase.com/docs/crawling-api/#authentication/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution 'API Token').
4. **Safeguard Your API Token**: Your API token is a valuable asset, so it's crucial to keep it secure. Avoid sharing it publicly, and refrain from committing it to version control systems like Git. This API token will be an integral part of your Python code, enabling you to access the Crawlbase Crawling API effectively.
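One simple way to follow that advice is to read the token from an environment variable instead of hard-coding it. The variable name `CRAWLBASE_TOKEN` below is just a convention chosen here, not something the Crawlbase library requires:

```python
import os

def load_token():
    # CRAWLBASE_TOKEN is our own naming convention; any environment
    # variable name works, as long as it stays out of version control.
    token = os.environ.get("CRAWLBASE_TOKEN")
    if not token:
        print("Warning: CRAWLBASE_TOKEN is not set")
    return token

token = load_token()
```

You would then pass `token` into `CrawlingAPI({'token': token})` instead of pasting the literal string into your source file.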
With Pandas and the Crawlbase library installed, a Crawlbase API token in hand, and optionally within a virtual environment, you're now equipped with the essential tools to start scraping data from AliExpress using Python. In the following sections, we'll delve deeper into the process and guide you through each step.
## Understanding AliExpress Website Structure
To become proficient in utilizing the Crawlbase Crawling API for AliExpress, it's essential to have a foundational understanding of the website's structure. AliExpress employs a specific layout for its search and product pages. In this section, we will delve into the layout of AliExpress search pages and product pages, setting the stage for utilizing the Crawlbase API's built-in scraping capabilities.
### Layout of AliExpress Search Pages
AliExpress search pages serve as the gateway for discovering products based on your search criteria. These pages consist of several critical components:

- **Search Bar**: The search bar is where users input keywords, product names, or categories to initiate their search.
- **Filter Options**: AliExpress offers various filters to refine search results precisely. These filters include price ranges, shipping options, product ratings, and more.
- **Product Listings**: Displayed in a grid format, product listings present images, titles, prices, and seller details. Each listing is encapsulated within an HTML container, often denoted by specific classes or identifiers.
- **Pagination**: Due to the extensive product catalog, search results are distributed across multiple pages. Pagination controls, including "Next" and "Previous" buttons, enable users to navigate through result pages.
Understanding the structural composition of AliExpress search pages is crucial for effectively using the Crawlbase API to extract the desired data. In the forthcoming sections, we will explore how to interact programmatically with these page elements, utilizing Crawlbase's scraping capabilities.
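As a quick sketch of how the pagination described above translates into code, the helper below builds a list of search URLs to crawl. Note that using `page` as the query parameter is an assumption based on common e-commerce pagination patterns, not a documented AliExpress contract; confirm it against the live site (or rely on the API's built-in scraper) before depending on it:

```python
from urllib.parse import urlencode

def search_page_urls(query, pages=3):
    # 'page' as the pagination parameter is an assumption here.
    base = "https://www.aliexpress.com/wholesale"
    return [f"{base}?{urlencode({'SearchText': query, 'page': p})}"
            for p in range(1, pages + 1)]

for url in search_page_urls("laptop accessories"):
    print(url)
```

Each URL in the list can then be handed to `api.get(...)` exactly like the single-page examples in the next section.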
### Layout of AliExpress Product Pages
Upon clicking a product listing, users are directed to a dedicated product page. Here, detailed information about a specific product is presented. Key elements found on AliExpress product pages include:

- **Product Title and Description**: These sections contain comprehensive textual data about the product, including its features, specifications, and recommended use. Extracting this information is integral for cataloging and analyzing products.
- **Media Gallery**: AliExpress often includes a multimedia gallery featuring images and, occasionally, videos. These visual aids provide potential buyers with a holistic view of the product.
- **Price and Seller Information**: This segment furnishes essential data regarding the product's price, shipping particulars, seller ratings, and contact details. This information aids users in making informed purchase decisions.
- **Customer Reviews**: Reviews and ratings provided by previous buyers offer valuable insights into the product's quality, functionality, and the reliability of the seller. Gathering and analyzing these reviews can be instrumental for assessing products.
- **Purchase Options**: AliExpress offers users the choice to add the product to their cart for later purchase or initiate an immediate transaction. Extracting this information allows for monitoring product availability and pricing changes.
With a solid grasp of AliExpress's website layout, we are well-prepared to leverage the Crawlbase Crawling API to streamline the data extraction process. The following sections will dive into the practical aspects of utilizing the API for AliExpress data scraping.
## Utilizing the Crawlbase Python Library
Now that we've established a foundation for understanding AliExpress's website structure, let's delve into the practical application of the Crawlbase Python library to streamline the web scraping process. This section will guide you through the steps required to harness the power of the Crawlbase [Crawling API](https://crawlbase.com/crawling-api-avoid-captchas-blocks/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution 'Crawling API') effectively.
### Importing and Initializing the CrawlingAPI Class
To begin, you'll need to import the Crawlbase Python library and initialize the `CrawlingAPI` class. This class acts as your gateway to making HTTP requests to AliExpress and retrieving structured data. Here's a basic example of how to get started:
```python
from crawlbase import CrawlingAPI
# Initialize the Crawlbase API with your API token
api = CrawlingAPI({ 'token': 'YOUR_CRAWLBASE_TOKEN' })
```
Make sure to replace 'YOUR_CRAWLBASE_TOKEN' with your actual Crawlbase API token, which you obtained during the setup process.
### Making HTTP Requests to AliExpress
With the `CrawlingAPI` class instantiated, you can now make HTTP requests to AliExpress. Crawlbase simplifies this process significantly. To scrape data from a specific AliExpress search page, you need to specify the URL of that page. For example:
```python
# Define the URL of the AliExpress search page you want to scrape
aliexpress_search_url = 'https://www.aliexpress.com/wholesale?SearchText=your-search-query-here'
# Make an HTTP GET request to the specified URL
response = api.get(aliexpress_search_url)
```
Crawlbase will handle the HTTP request for you, and the response object will contain the HTML content of the page.
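Before using that content, it is worth guarding on the status code. The small helper below assumes the Crawlbase Python library's response shape, a dictionary with `'status_code'` and `'body'` keys, and behaves the same on a real response or a stub:

```python
def extract_html(response):
    # Assumes the Crawlbase response dict shape: 'status_code' + 'body'.
    if response.get('status_code') == 200:
        return response['body']
    return None

# The same call works on api.get(...)'s return value or on a test stub:
html = extract_html({'status_code': 200, 'body': '<html>...</html>'})
print(html is not None)  # → True
```

A `None` result signals that the crawl failed and should be retried or logged rather than parsed.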
## Managing Parameters and Customizing Responses
When using the Crawlbase Python library, you have the flexibility to customize your requests by including various parameters to tailor the API's behavior to your needs. You can read about them [here](https://crawlbase.com/docs/crawling-api/parameters/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution 'Crawling API parameters'). The ones we need are described below.
### Scraper Parameter
The `scraper` parameter allows you to specify the type of data you want to extract from AliExpress. Crawlbase offers predefined scrapers for common AliExpress page types. You can choose from the following options:
- **`aliexpress-product`**: Use this scraper for AliExpress product pages. It extracts detailed information about a specific product. Here's an example of how to use it:
```python
response = api.get(aliexpress_search_url, {'scraper': 'aliexpress-product'})
```
- **`aliexpress-serp`**: This scraper is designed for AliExpress search results pages. It returns an array of products from the search results. Here's how to use it:
```python
response = api.get(aliexpress_search_url, {'scraper': 'aliexpress-serp'})
```
Please note that the `scraper` parameter is optional. If you don't use it, you will receive the full HTML of the page, giving you the freedom to perform custom scraping. With the `scraper` parameter, the response comes back as JSON.
### Format Parameter
The `format` parameter enables you to define the format of the response you receive from the Crawlbase API. You can choose between two formats: `json` or `html`. The default format is `html`. Here's how to specify the format:
```python
response = api.get(aliexpress_search_url, {'format': 'json'})
```
- **HTML Response**: If you select the html response format (which is the default), you will receive the HTML content of the page as the response. The response parameters will be added to the response headers.
```text
Headers:
url: https://www.aliexpress.com/wholesale?SearchText=laptop+accessories
original_status: 200
pc_status: 200
Body:
HTML of the page
```
- **JSON Response**: If you choose the json response format, you will receive a JSON object that you can easily parse. This JSON object contains all the information you need, including response parameters.
```json
{
  "original_status": "200",
  "pc_status": 200,
  "url": "https%3A%2F%2Faliexpress.com%2F/wholesale%3FSearchText%3Dlaptop+accessories",
  "body": "HTML of the page"
}
```
These parameters provide you with the flexibility to retrieve data in the format that best suits your web scraping and data processing requirements. Depending on your use case, you can opt for either the JSON response for structured data or the HTML response for more customized scraping.
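Note that the `url` field in the JSON response above comes back percent-encoded. If you need the readable form, Python's standard library can decode it; a quick sketch using the encoded value from the example:

```python
from urllib.parse import unquote_plus

# Percent-encoded URL as returned in the JSON response above
encoded = "https%3A%2F%2Faliexpress.com%2F/wholesale%3FSearchText%3Dlaptop+accessories"

# unquote_plus also turns '+' into a space, matching query-string encoding
decoded = unquote_plus(encoded)
print(decoded)
```

Use plain `unquote` instead if you want to keep `+` characters literal.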
## Scraping AliExpress Search and Product Pages
In this section, we will delve into the practical aspect of scraping AliExpress using the Crawlbase Crawling API. We'll cover three key tasks: scraping AliExpress search result pages, handling pagination on those result pages, and scraping AliExpress product pages. We will use the search query "water bottle" and scrape the results related to it. Below are Python code examples for each of these tasks, along with explanations.
### Scraping AliExpress Search Result Pages
To scrape AliExpress search result pages, we utilize the 'aliexpress-serp' scraper, a built-in scraper specifically designed for extracting product information from search results. The code initializes the Crawlbase Crawling API, sends an HTTP GET request to an AliExpress search URL, specifying the 'aliexpress-serp' scraper, and extracts product data from the JSON response.
```python
from crawlbase import CrawlingAPI
import json
# Initialize the Crawlbase API with your API token
api = CrawlingAPI({ 'token': 'YOUR_CRAWLBASE_TOKEN' })
# Define the URL of the AliExpress search page you want to scrape
aliexpress_search_url = 'https://www.aliexpress.com/wholesale?SearchText=water+bottle'
# Make an HTTP GET request to the specified URL using the 'aliexpress-serp' scraper
response = api.get(aliexpress_search_url, {'scraper': 'aliexpress-serp'})
if response['status_code'] == 200:
    # Loading JSON from response body after decoding byte data
    response_json = json.loads(response['body'].decode('latin1'))
    # Getting Scraper Results
    scraper_result = response_json['body']
    # Print scraped data
    print(json.dumps(scraper_result, indent=2))
```
Example Output:
```json
{
"products": [
{
"title": "Water-Bottle Plastic Travel Leak-Proof Girl Portable Anti-Fall Fruit Bpa-Free Creative",
"price": {
"current": "US $4.99"
},
"url": "https://www.aliexpress.com/item/4000576944298.html?algo_pvid=8d89f35c-7b12-4d10-a1c5-7fddeece5237&algo_expid=8d89f35c-7b12-4d10-a1c5-7fddeece5237-0&btsid=0ab6d70515838441863703561e47cf&ws_ab_test=searchweb0_0,searchweb201602_,searchweb201603_",
"image": "https://ae01.alicdn.com/kf/Hd0fdfd6d7e5f4a63b9383223500f704be/480ml-Creative-Fruit-Plastic-Water-Bottle-BPA-Free-Portable-Leak-Proof-Travel-Drinking-Bottle-for-Kids.jpg_220x220xz.jpg_.webp",
"shippingMessage": "Free Shipping",
"soldCount": 177,
"ratingValue": 5,
"ratingLink": "https://www.aliexpress.com/item/4000576944298.html?algo_pvid=8d89f35c-7b12-4d10-a1c5-7fddeece5237&algo_expid=8d89f35c-7b12-4d10-a1c5-7fddeece5237-0&btsid=0ab6d70515838441863703561e47cf&ws_ab_test=searchweb0_0,searchweb201602_,searchweb201603_#feedback",
"sellerInformation": {
"storeName": "Boxihome Store",
"storeLink": "https://www.aliexpress.com/store/5001468"
}
},
{
"title": "Lemon-Juice Drinking-Bottle Infuser Clear Fruit Plastic Large-Capacity Sports 800ml/600ml",
"price": {
"current": "US $3.17 - 4.49"
},
"url": "https://www.aliexpress.com/item/4000162032645.html?algo_pvid=8d89f35c-7b12-4d10-a1c5-7fddeece5237&algo_expid=8d89f35c-7b12-4d10-a1c5-7fddeece5237-1&btsid=0ab6d70515838441863703561e47cf&ws_ab_test=searchweb0_0,searchweb201602_,searchweb201603_",
"image": "https://ae01.alicdn.com/kf/H688cb15d9cd94fa58692294fa6780b59f/800ml-600ml-Large-Capacity-Sports-Fruit-Lemon-Juice-Drinking-Bottle-Infuser-Clear-Portable-Plastic-Water-Bottle.jpg_220x220xz.jpg_.webp",
"shippingMessage": "Free Shipping",
"soldCount": 1058,
"ratingValue": 4.6,
"ratingLink": "https://www.aliexpress.com/item/4000162032645.html?algo_pvid=8d89f35c-7b12-4d10-a1c5-7fddeece5237&algo_expid=8d89f35c-7b12-4d10-a1c5-7fddeece5237-1&btsid=0ab6d70515838441863703561e47cf&ws_ab_test=searchweb0_0,searchweb201602_,searchweb201603_#feedback",
"sellerInformation": {
"storeName": "Shop5112149 Store",
"storeLink": "https://www.aliexpress.com/store/5112149"
}
},
...
],
"relatedSearches": [
{
"title": "Water+Bottles",
"link": "https://www.aliexpress.com/w/wholesale-Water%252BBottles.html"
},
{
"title": "Water Bottles",
"link": "https://www.aliexpress.com/w/wholesale-Water-Bottles.html"
},
...
],
"relatedCategories": [
{
"title": "Home & Garden",
"link": "https://www.aliexpress.com/w/wholesale-water-bottle.html?CatId=15"
},
{
"title": "Water Bottles",
"link": "https://www.aliexpress.com/w/wholesale-water-bottle.html?CatId=100004985"
},
...
]
}
```
### Handling Pagination on Search Result Pages
To navigate through multiple pages of search results, you can increment the page number in the search URL. This example demonstrates the basic concept of pagination, allowing you to scrape data from subsequent pages.
```python
from crawlbase import CrawlingAPI
import json
# Initialize the Crawlbase API with your API token
api = CrawlingAPI({ 'token': 'YOUR_CRAWLBASE_TOKEN' })
# Define the base URL of the AliExpress search page you want to scrape
base_url = 'https://www.aliexpress.com/wholesale?SearchText=water+bottle&page={}'
# Initialize a list to store all scraped search results
all_scraped_products = []
# Define the number of pages you want to scrape
num_pages_to_scrape = 5
for page_num in range(1, num_pages_to_scrape + 1):
    # Construct the URL for the current page
    aliexpress_search_url = base_url.format(page_num)
    # Make an HTTP GET request to the specified URL using the 'aliexpress-serp' scraper
    response = api.get(aliexpress_search_url, {'scraper': 'aliexpress-serp'})
    if response['status_code'] == 200:
        # Loading JSON from response body after decoding byte data
        response_json = json.loads(response['body'].decode('latin1'))
        # Getting Scraper Results
        scraper_result = response_json['body']
        # Add the scraped products from the current page to the list
        all_scraped_products.extend(scraper_result['products'])
```
In this code, we construct the search result page URL for each page by incrementing the page number in the URL. We then loop through the specified number of pages, make a request to each page, extract the products from each page's search results using the 'aliexpress-serp' scraper, and add them to a list (`all_scraped_products`). This allows you to scrape and consolidate search results from multiple pages efficiently.
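The URL construction itself is plain string formatting, so it can be sketched and tested without touching the API. The helper below mirrors the base URL used above; in the real loop you would pass each URL to `api.get(...)`, ideally with a short pause between requests to avoid hammering the endpoint:

```python
BASE_URL = 'https://www.aliexpress.com/wholesale?SearchText=water+bottle&page={}'

def build_page_urls(num_pages):
    """Return the search-result URLs for pages 1..num_pages."""
    return [BASE_URL.format(page) for page in range(1, num_pages + 1)]

urls = build_page_urls(3)
print(urls[0])
print(urls[-1])
# In the real loop: response = api.get(url, {'scraper': 'aliexpress-serp'})
# followed by e.g. time.sleep(1) between pages.
```

Keeping the URL logic in a small function like this makes it easy to change the query or page range in one place.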
### Scraping AliExpress Product Pages
When scraping AliExpress product pages, we use the 'aliexpress-product' scraper, designed for detailed product information extraction. The code initializes the Crawlbase API, sends an HTTP GET request to an AliExpress product page URL, specifying the 'aliexpress-product' scraper, and extracts product data from the JSON response.
```python
from crawlbase import CrawlingAPI
import json
# Initialize the Crawlbase API with your API token
api = CrawlingAPI({ 'token': 'YOUR_CRAWLBASE_TOKEN' })
# Define the URL of an AliExpress product page you want to scrape
aliexpress_product_url = 'https://www.aliexpress.com/item/4000275547643.html'
# Make an HTTP GET request to the specified URL using the 'aliexpress-product' scraper
response = api.get(aliexpress_product_url, {'scraper': 'aliexpress-product'})
if response['status_code'] == 200:
    # Loading JSON from response body after decoding byte data
    response_json = json.loads(response['body'].decode('latin1'))
    # Getting Scraper Results
    scraper_result = response_json['body']
    # Print scraped data
    print(json.dumps(scraper_result, indent=2))
```
Example Output:
```json
{
"title": "Luxury Transparent Matte Case For iphone 11 Pro XS MAX XR X Hybrid Shockproof Silicone Phone Case For iPhone 6 6s 7 8 Plus Cover",
"price": {
"current": "US $3.45",
"original": "US $4.31",
"discount": "-20%"
},
"options": [
{
"name": "Material",
"values": [
"for iphone 6 6S",
"for 6Plus 6SPlus",
...
]
},
{
"name": "Color",
"values": [
"Black",
"Blue",
...
      ]
    }
],
"url": "https://www.aliexpress.com/item/4000275547643.html",
"mainImage": "https://ae01.alicdn.com/kf/H0913e18b6ff9415e86db047607c6fb9dB/Luxury-Transparent-Matte-Case-For-iphone-11-Pro-XS-MAX-XR-X-Hybrid-Shockproof-Silicone-Phone.jpg",
"images": [
"https://ae01.alicdn.com/kf/H0913e18b6ff9415e86db047607c6fb9dB/Luxury-Transparent-Matte-Case-For-iphone-11-Pro-XS-MAX-XR-X-Hybrid-Shockproof-Silicone-Phone.jpg",
"https://ae01.alicdn.com/kf/H1507016f0a504f35bbf2ec0d5763d14c4/Luxury-Transparent-Matte-Case-For-iphone-11-Pro-XS-MAX-XR-X-Hybrid-Shockproof-Silicone-Phone.jpg",
...
],
"customerReview": {
"average": 4.8,
"reviewsCount": 146
},
"soldCount": 1184,
"availableOffer": "Additional 3% off (2 pieces or more)",
"availableQuantity": 37693,
"wishlistCount": 983,
"sellerInformation": {
"storeName": "YiPai Digital Store",
"storeLink": "https://www.aliexpress.com/store/2056153",
"feedback": "92.9% Positive Feedback",
"followersCount": 462
},
"shippingSummary": {
"shippingPrice": "Shipping: US $0.41",
"destination": "to Austria via China Post Ordinary Small Packet Plus",
"estimatedDelivery": "Estimated Delivery: 25-46 days"
},
"buyerProtection": [
"60-Day Buyer Protection",
"Money back guarantee"
],
"recommendations": [
{
"link": "https://www.aliexpress.com/item/33053895974.html?gps-id=pcDetailBottomMoreThisSeller&scm=1007.13339.146401.0&scm_id=1007.13339.146401.0&scm-url=1007.13339.146401.0&pvid=ae985f4e-3eca-4c9e-a788-1f37bd5ff3e0",
"price": "US $1.55",
"image": "https://ae01.alicdn.com/kf/H604ad80f527c4b119e3bdb1be20b74cal.jpg_220x220q90.jpg_.webp"
},
...
],
"description": {
"detailedImages": [
"https://ae01.alicdn.com/kf/Hccaa2c9bf726484f94792998d93cc802Y.jpg",
"https://ae01.alicdn.com/kf/Hffe2339701634534a2fc4d5e183ff0aee.jpg",
...
],
"relatedProducts": [
{
"title": "Ultra Slim Silicone Case for iphone 7 6 6s 8 X Cover Coque Candy Colors Soft TPU Matte Phone Case for iphone7 8 plus XS MAX XR",
"price": "USD 1.29-1.50",
"link": "https://www.aliexpress.com/item/Ultra-Slim-Silicone-Case-for-iphone-7-6-6s-8-X-Cover-Coque-Candy-Colors-Soft/32772422277.html",
"image": "https://ae01.alicdn.com/kf/H5d0d6ac957ee4f57942ec172a7ed3529v.jpg_120x120.jpg"
},
...
]
},
"storeCategores": [
{
"parentNode": "For iPhone case",
"parentNodeLink": "https://www.aliexpress.com/store/group/For-iPhone-case/2056153_507217422.html",
"childrenNodes": [
{
"childNode": "For iPhone 5 5S SE",
"childNodeLink": "https://www.aliexpress.com/store/group/For-iPhone-5-5S-SE/2056153_507296208.html"
},
...
]
},
...
]
}
```
These code examples provide a step-by-step guide on how to utilize the Crawlbase Crawling API to scrape AliExpress search result pages and product pages. The built-in scrapers simplify the process, ensuring you receive structured data in JSON format, making it easier to handle and process the extracted information. This approach is valuable for various applications, such as price tracking, market analysis, and competitive research on the AliExpress platform.
## Storing Data
After successfully scraping data from AliExpress pages, the next crucial step is storing this valuable information for future analysis and reference. In this section, we will explore two common methods for data storage: saving scraped data in a CSV file and storing it in an SQLite database. These methods allow you to organize and manage your scraped data efficiently.
### Storing Scraped Data in a CSV File
CSV (Comma-Separated Values) is a widely used format for storing tabular data and is particularly useful when scraping AliExpress with Python. It's a simple and human-readable way to store structured data, making it an excellent choice for saving your scraped AliExpress product data.
We'll extend our previous search page scraping script to include a step for saving some important information from scraped data into a CSV file using the popular Python library, pandas. Here's an updated version of the script:
```python
import pandas as pd
from crawlbase import CrawlingAPI
import json
# Initialize the Crawlbase API with your API token
api = CrawlingAPI({ 'token': 'YOUR_CRAWLBASE_TOKEN' })
# Define the base URL of the AliExpress search page you want to scrape
base_url = 'https://www.aliexpress.com/wholesale?SearchText=water+bottle&page={}'
# Initialize a list to store all scraped products data
scraped_products_data = []
# Define the number of pages you want to scrape
num_pages_to_scrape = 5
for page_num in range(1, num_pages_to_scrape + 1):
    # Construct the URL for the current page
    aliexpress_search_url = base_url.format(page_num)
    # Make an HTTP GET request to the specified URL using the 'aliexpress-serp' scraper
    response = api.get(aliexpress_search_url, {'scraper': 'aliexpress-serp'})
    if response['status_code'] == 200:
        # Loading JSON from response body after decoding byte data
        response_json = json.loads(response['body'].decode('latin1'))
        # Getting Scraper Results
        scraper_result = response_json['body']
        # Add the scraped products data from the current page to the list
        for product in scraper_result['products']:
            data = {
                "title": product['title'],
                "price": product['price']['current'],
                "rating": product['ratingValue']
            }
            scraped_products_data.append(data)
# Save scraped data as a CSV file
df = pd.DataFrame(scraped_products_data)
df.to_csv('aliexpress_products_data.csv', index=False)
```
In this updated script, we've introduced pandas, a powerful data manipulation and analysis library. After scraping and accumulating the product details in the `scraped_products_data` list, we create a pandas DataFrame from this data. Then, we use the `to_csv` method to save the DataFrame to a CSV file named "aliexpress_products_data.csv" in the current directory. Setting `index=False` ensures that we don't save the DataFrame's index as a separate column in the CSV file.
You can easily work with and analyze your scraped data by employing pandas. This CSV file can be opened in various spreadsheet software or imported into other data analysis tools for further exploration and visualization.
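If you'd rather avoid the pandas dependency, the standard-library `csv` module can write (and read back) the same rows. A self-contained sketch with dummy rows standing in for the scraped data; the in-memory buffer is used here only so the round trip is easy to see:

```python
import csv
import io

# Dummy rows standing in for the scraped AliExpress data above
scraped_products_data = [
    {"title": "Sample bottle", "price": "US $4.99", "rating": 5},
    {"title": "Another bottle", "price": "US $3.17", "rating": 4.6},
]

# Write the rows as CSV (an open file object would work the same way)
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["title", "price", "rating"])
writer.writeheader()
writer.writerows(scraped_products_data)

# Read it back to verify the round trip; DictReader yields strings
buffer.seek(0)
rows = list(csv.DictReader(buffer))
print(rows[0]["title"])
```

Note that values read back through `csv.DictReader` are strings, so numeric fields like `rating` need converting before analysis.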
### Storing Scraped Data in an SQLite Database
If you prefer a more structured and query-friendly approach to data storage, SQLite is a lightweight, serverless database engine that can be a great choice. You can create a database table to store your scraped data, allowing for efficient data retrieval and manipulation. Here's how you can modify the search page script to store data in an SQLite database:
```python
import json
import sqlite3
from crawlbase import CrawlingAPI

# Initialize the CrawlingAPI class with your Crawlbase API token
api = CrawlingAPI({'token': 'YOUR_CRAWLBASE_TOKEN'})

# Define the base URL of the AliExpress search page you want to scrape
base_url = 'https://www.aliexpress.com/wholesale?SearchText=water+bottle&page={}'

# Initialize a list to store all scraped products data
scraped_products_data = []

# Define the number of pages you want to scrape
num_pages_to_scrape = 5

def create_database():
    conn = sqlite3.connect('aliexpress_products.db')
    cursor = conn.cursor()
    cursor.execute('''CREATE TABLE IF NOT EXISTS products (
                        id INTEGER PRIMARY KEY AUTOINCREMENT,
                        title TEXT,
                        price TEXT,
                        rating TEXT
                    )''')
    conn.commit()
    conn.close()

def save_to_database(data):
    conn = sqlite3.connect('aliexpress_products.db')
    cursor = conn.cursor()
    # Create a list of tuples from the data
    data_tuples = [(product['title'], product['price'], product['rating']) for product in data]
    # Insert data into the products table
    cursor.executemany('''
        INSERT INTO products (title, price, rating)
        VALUES (?, ?, ?)
    ''', data_tuples)
    conn.commit()
    conn.close()

for page_num in range(1, num_pages_to_scrape + 1):
    # Construct the URL for the current page
    aliexpress_search_url = base_url.format(page_num)
    # Make an HTTP GET request to the specified URL using the 'aliexpress-serp' scraper
    response = api.get(aliexpress_search_url, {'scraper': 'aliexpress-serp'})
    if response['status_code'] == 200:
        # Loading JSON from response body after decoding byte data
        response_json = json.loads(response['body'].decode('latin1'))
        # Getting Scraper Results
        scraper_result = response_json['body']
        # Add the scraped products data from the current page to the list
        for product in scraper_result['products']:
            data = {
                "title": product['title'],
                "price": product['price']['current'],
                "rating": product['ratingValue']
            }
            scraped_products_data.append(data)

# Create the database and products table
create_database()

# Insert scraped data into the SQLite database
save_to_database(scraped_products_data)
```
In this updated code, we've added functions for creating the SQLite database and table (`create_database`) and for saving the scraped data to the database (`save_to_database`). The `create_database` function checks whether the database and table exist and creates them if they don't. The `save_to_database` function inserts the scraped data into the `products` table.
By running this code, you'll store your scraped AliExpress product data in an SQLite database named 'aliexpress_products.db'. You can later retrieve and manipulate this data using SQL queries or access it programmatically in your Python projects.
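Once the rows are in SQLite, plain SQL gets them back out. A self-contained sketch using an in-memory database and the same `products` schema as above (the two inserted rows are dummy data for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # use 'aliexpress_products.db' for the real file
cursor = conn.cursor()
cursor.execute('''CREATE TABLE products (
                    id INTEGER PRIMARY KEY AUTOINCREMENT,
                    title TEXT,
                    price TEXT,
                    rating TEXT
                )''')
cursor.executemany(
    'INSERT INTO products (title, price, rating) VALUES (?, ?, ?)',
    [('Sample bottle', 'US $4.99', '5'), ('Another bottle', 'US $3.17', '4.6')],
)
conn.commit()

# Fetch only highly rated products; rating is stored as TEXT, so cast it
cursor.execute('SELECT title FROM products WHERE CAST(rating AS REAL) >= 4.8')
high_rated = cursor.fetchall()
print(high_rated)
conn.close()
```

Because the scripts above store `rating` as TEXT, casting it to REAL in the query is the simplest way to compare it numerically.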
## Final Words
While we're on the topic of web scraping, if you're curious to dig even deeper and broaden your understanding by exploring data extraction from other e-commerce giants like Walmart, Amazon, I'd recommend checking out the [Crawlbase blog page](https://crawlbase.com/blog/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution 'Crawlbase blog page').
Our comprehensive guides don’t just end here; we offer a wealth of knowledge on scraping a variety of popular e-commerce platforms, ensuring you're well-equipped to tackle the challenges presented by each unique website architecture. Check out [how to scrape Amazon search pages](https://crawlbase.com/blog/scrape-amazon-search-pages-with-crawling-api/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution 'Scraping Amazon Search Pages') and [Guide on Walmart Scraping](https://crawlbase.com/blog/scraping-walmart-developers-roadmap/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution 'Walmart Scraping').
## Frequently Asked Questions
### Q: What are the advantages of using the Crawlbase Crawling API for web scraping, and how does it differ from other scraping methods?
The Crawlbase Crawling API offers several advantages for web scraping compared to traditional methods. First, it provides IP rotation and user-agent rotation, making it less likely for websites like AliExpress to detect and block scraping activities. Second, it offers built-in scrapers tailored for specific websites, simplifying the data extraction process. Lastly, it provides the flexibility to receive data in both HTML and JSON formats, allowing users to choose the format that best suits their data processing needs. This API streamlines and enhances the web scraping experience, making it a preferred choice for scraping data from AliExpress and other websites.
### Q: Can I use this guide to scrape data from any website, or is it specific to AliExpress?
While the guide primarily focuses on scraping AliExpress using the Crawlbase Crawling API, the fundamental concepts and techniques discussed here are applicable to web scraping in general. You can apply these principles to scrape data from other websites, but keep in mind that each website may have different structures, terms of service, and scraping challenges. Always ensure you have the necessary rights and permissions to scrape data from a specific website.
### Q: How do I avoid getting blocked or flagged as a scraper while web scraping on AliExpress?
To minimize the risk of being blocked, use techniques like [IP rotation](https://crawlbase.com/blog/rotating-ip-address/?utm_source=dev.to&utm_medium=referral&utm_campaign=content_distribution 'IP rotation') and user-agent rotation, which are supported by the Crawlbase Crawling API. These techniques help you mimic human browsing behavior, making it less likely for AliExpress to identify you as a scraper. Additionally, avoid making too many requests in a short period and be respectful of the website's terms of service. Responsible scraping is less likely to result in blocks or disruptions.
### Q: Can I scrape AliExpress product prices and use that data for pricing my own products?
While scraping product prices for market analysis is a common and legitimate use case, it's essential to ensure that you comply with AliExpress's terms of service and any legal regulations regarding data usage. Pricing your own products based on scraped data can be a competitive strategy, but you should verify the accuracy of the data and be prepared for it to change over time. Additionally, consider ethical and legal aspects when using scraped data for business decisions.
*By crawlbase*
# The comprehensive guide to Entity Framework Core

*Published 2023-10-16 on [dev.to](https://dev.to/maurizio8788/the-comprehensive-guide-to-entity-framework-core-l5a) · Tags: dotnet, beginners, api, csharp*

Hello everyone, in the previous article, we provided an overview of how to access data from our database through ADO.NET. Most of the time, we won't use ADO.NET in our applications; instead, we'll use an **ORM (Object Relational Mapper)**, and in .NET Core, the most commonly used one is **Entity Framework Core**.
In this article, we won't be using the AdventureWorks2022 database we used previously. Instead, we will examine an example of a small TodoList. This choice will allow us to address topics like migrations and the Code First approach, which we will discuss in detail later.
### Table of Contents:
- [**What is an ORM**](#what-is-an-orm)
- [**Database First and Code First Approach**](#database-first-and-code-first-approach)
- [**Database First**](#database-first)
- [**Code First**](#code-first)
- [**The Context Class**](#the-context-class)
- [**Column Mapping**](#column-mapping)
- [**Queries with Entity Framework**](#queries-with-entity-framework)
- [**Retrieve Entities**](#retrieve-entities)
- [**Add a New Entity**](#add-a-new-entity)
- [**Update an Entity**](#update-an-entity)
- [**Delete an Entity**](#delete-an-entity)
### **What is an ORM**
An **ORM (Object Relational Mapper)** is a data access library that enables us to map each table in our databases to a corresponding class. It allows us to map each individual column and its corresponding data type, and it seeks to provide a more fluent way of handling data access through a global configuration. In the case of **Entity Framework Core (EF Core)**, this configuration is represented by the DbContext, which we will delve into further later on.
### **Database First and Code First Approach**
The EF Core team has provided us with two development approaches:
- **Database First**: This approach starts with an existing database schema. It allows you to generate entity classes and a context based on the structure of the database. You work with the database schema as your starting point and generate code from it.
- **Code First**: In contrast, the Code First approach begins with defining your entity classes and their relationships in code. From there, you can generate a database schema based on your code. This approach is particularly useful when you want to work primarily with code and let EF Core create and manage the database schema for you.
#### **Database First**
The **Database First** approach (or DB First) is used when we have an existing database schema (or decide to create the database schema first) and then create entity classes and the database context based on it manually or by using a process called **scaffolding**.
Scaffolding is a reverse engineering technique that allows us to create entity classes and a DbContext based on the schema of a database.
In EF Core, you can perform this operation by installing the NuGet package **Microsoft.EntityFrameworkCore.Design** in addition to the EF Core package of the database provider you are using.
Once these prerequisites are satisfied, you can run the `Scaffold-DbContext` command from the command line, providing it with the connection string of your database like this:
```powershell
Scaffold-DbContext 'Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=TodoList' Microsoft.EntityFrameworkCore.SqlServer
```
Alternatively, by including it in our `appsettings.json` file, we can retrieve it using this approach.
```powershell
Scaffold-DbContext 'Name=ConnectionStrings:TodoContext' Microsoft.EntityFrameworkCore.SqlServer
```
Using these two simple commands, all the entities of our tables with their corresponding relationships will be created, and additionally, the DbContext class will be generated.
#### **Code First**
The Code First approach, on the other hand, allows us to create entity tables, their relationships, the mapping between various columns, and optionally, the generation of initial data using what are called "Migrations."
Migrations are a system for updating the database schema and keeping track of ongoing changes to it. In fact, after applying the first migration, you'll see that a table named `__EFMigrationsHistory` has been created. This system allows us, in case of an error, to revert the changes.
To create a migration, you need to create the DbContext and the entities you wish to create (operations we will delve into later), install the NuGet package **Microsoft.EntityFrameworkCore.Tools**, and if you're using Visual Studio (recommended), you need to run the following command from the Package Manager Console:
```powershell
Add-Migration <migration name>
```
This command will create a 'Migrations' folder within the project, containing the following files:
- **XXXXXXXXXXXXXX_<migration name>.cs**, which contains the instructions for applying the migration.
- **<DbContextName>ModelSnapshot.cs**, which creates a "snapshot" of the current model. It is used to identify changes made during the implementation of the next migration.
If you wish to run the migration and update the schema of your database, you should execute the following command:
```powershell
Update-Database
```
If you want to remove a migration, you can run the following command:
```powershell
Remove-Migration
```
Apart from the instructions just explained, there are two tools that allow us to perform these operations much more easily, which I recommend you explore:
- Entity Framework Core command-line tools: [dotnet ef](https://learn.microsoft.com/it-it/ef/core/cli/dotnet)
- EF Core Power Tools: [GitHub repository](https://github.com/ErikEJ/EFCorePowerTools/)
## **The Context Class**
The DbContext class is the primary class that allows us to query our data, manage the database connection, and ensure that the mapping is correct.
To create a DbContext class in EF Core, it is sufficient to make one of our classes inherit from the DbContext class, like this:
```csharp
public class TodosContext : DbContext
{
}
```
Of course, you can give your class any name you prefer. However, it's a common convention to name it after the database and add the "Context" suffix. This way, if you have multiple DbContext classes, you can easily distinguish them.
Another small step to follow is to use the default constructor of the DbContext class:
```csharp
public class TodosContext : DbContext
{
    public TodosContext(DbContextOptions<TodosContext> options) : base(options) {}
}
```
Now that we've created our DbContext class, we can finally create our entities. In this case, let's simulate a simple TodoList application by creating our two entities: **Todo** and **TodoItem**:
```csharp
// file Todo.cs
public class Todo {
    public Todo() {
        TodoItems = new HashSet<TodoItem>();
    }

    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<TodoItem> TodoItems { get; set; }
}

// file TodoItem.cs
public class TodoItem {
    public int Id { get; set; }
    public string Description { get; set; }
    public bool IsCompleted { get; set; }
    public int TodoId { get; set; }
    public Todo? Todo { get; set; }
}
```
Once we have created our classes, we can add them to our DbContext as properties of type **DbSet<T>**:
```csharp
public class TodosContext : DbContext
{
    public TodosContext(DbContextOptions<TodosContext> options) : base(options) {}

    public DbSet<Todo> Todos { get; set; }
    public DbSet<TodoItem> TodoItems { get; set; }
}
```
The **DbSet** class in EF Core represents a specific table or view in the database within the database context. It enables CRUD operations, LINQ queries, and provides a change tracking mechanism to simplify data management within the database entities.
### **Column Mapping**
The DbContext has many methods that you can override, and one of the most important ones I'd like to mention is the **OnModelCreating** method. It allows you to perform fluent mapping of your entities using the **modelBuilder** parameter:
```csharp
public class TodosContext : DbContext
{
    public TodosContext(DbContextOptions<TodosContext> options) : base(options) {}

    // ...DbSet properties shown above

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Todo>(builder => {
            builder.HasKey(x => x.Id);
            builder.Property(t => t.Id)
                .ValueGeneratedOnAdd();
            builder.Property(x => x.Name)
                .HasColumnName("todo_name");
            builder.HasMany(x => x.TodoItems)
                .WithOne(t => t.Todo)
                .HasForeignKey(t => t.TodoId)
                .OnDelete(DeleteBehavior.Cascade);
        });
    }
}
```
This is what we call **Fluent Mapping** of entity classes. Through the builder, we inform our DbContext about properties, the model, names (if different from the property name in the entity class), whether a property is required, and most importantly, we can map relationships between various entities.
A widely used mapping strategy in EF Core involves Data Annotations, which are attributes used to customize the mapping of entity classes to database tables. Data Annotations allow you to define column attributes and relationships in detail. An example illustrates this practice:
```csharp
[Table("TodoItems")]
public class TodoItem {
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public int Id { get; set; }

    [MaxLength(150)]
    [Required]
    [Column("item_description")]
    public string Description { get; set; }

    [Column("item_is_completed")]
    public bool IsCompleted { get; set; }

    public int TodoId { get; set; }

    [ForeignKey(nameof(Todo))]
    public Todo? Todo { get; set; }
}
```
In the example above, we use Data Annotations to specify the table name ("TodoItems"), declare the primary key (Id), define the maximum length of the Description property and mark it as required, and map the IsCompleted property to a custom column name in the database. This provides detailed control over the mapping between the entity class and the database table.
### **Queries with Entity Framework**
Now that everything is correctly configured, we can finally think about how to execute our first query with EF Core.
First of all, we need to register the DbContext class with the .NET Core dependency injection system and define the database provider we are using. In this case, we are using Sqlite. You can install the NuGet package **Microsoft.EntityFrameworkCore.Sqlite** for this purpose:
```csharp
// file Program.cs
builder.Services.AddDbContext<TodosContext>(options => {
    options.UseSqlite(builder.Configuration.GetConnectionString("TodoContext"));
});
```
Let's add these instructions to the `OnModelCreating` method in our database context to have some sample data available immediately as an example:
```csharp
modelBuilder.Entity<Todo>(builder =>
{
    // …other properties
    builder.HasData(
        new Todo
        {
            Id = 1,
            Name = "Project management",
        });
});

modelBuilder.Entity<TodoItem>(builder =>
{
    builder.HasData(
        new TodoItem { Id = 1, Description = "Create a Database Context", IsCompleted = false, TodoId = 1 },
        new TodoItem { Id = 2, Description = "Create Todo entity", IsCompleted = false, TodoId = 1 },
        new TodoItem { Id = 3, Description = "Create TodoItem entity", IsCompleted = false, TodoId = 1 }
    );
});
```
#### **Retrieve Entities**
Finally, let's focus on making our first query:
```csharp
[Route("api/[controller]")]
[ApiController]
public class TodosController : ControllerBase
{
    private readonly TodosContext _context;

    public TodosController(TodosContext context)
    {
        _context = context;
    }

    [HttpGet]
    [Route(nameof(GetAllTodos))]
    public async Task<ActionResult<List<Todo>>> GetAllTodos()
    {
        try
        {
            List<Todo> todos = await _context.Todos.AsNoTracking().ToListAsync();
            return Ok(todos);
        }
        catch (Exception)
        {
            // "throw ex;" would reset the stack trace; a bare "throw;" preserves it
            throw;
        }
    }
}
```
As you can see, we've injected the context class into the controller's constructor and then used it in our endpoint to query the database with LINQ and retrieve the data created earlier:

Operators like **ToList**, **ToListAsync**, **ToArray**, **ToDictionary**, and similar methods are extension methods in EF Core used to actually execute **LINQ** queries and materialize the resulting data from the database. These methods are essential because they allow deferring the execution of queries until the actual results are requested. Additionally, they convert the results into convenient data structures such as lists, dictionaries, or arrays that can be easily used in the application. This approach provides better control over when queries are executed and yields optimized results for further processing.
Another important EF Core extension method that requires explanation is **AsNoTracking()**. This method is used to inform the framework not to track entities retrieved from the database. This choice significantly improves performance when fetching data from a query.
When using AsNoTracking(), the database context will not maintain an internal state of the retrieved entities. This means there will be no tracking of changes made to these entities, making the data retrieval process faster and more efficient.
However, it's important to note that when using AsNoTracking(), you won't have the ability to make direct changes to the retrieved entities and save them to the database without additional steps. Therefore, it's crucial to use this method carefully, reserving it for situations where you only need to retrieve data and not modify entities.
If, on the other hand, we want to retrieve the Todos along with their respective TodoItems, we can do the following:
```csharp
[HttpGet]
[Route(nameof(GetAllTodoWithItems))]
public async Task<ActionResult<List<Todo>>> GetAllTodoWithItems()
{
    try
    {
        List<Todo> todos = await _context.Todos
            .Include(x => x.TodoItems)
            .AsNoTracking()
            .ToListAsync();
        return Ok(todos);
    }
    catch (Exception)
    {
        throw; // preserve the original stack trace
    }
}
```
That `Include` call performs a left join on the TodoItems table and retrieves, if they exist, all the TodoItems for each Todo.

#### **Add a New Entity**
Inserting a new entity into our database is very simple:
```csharp
[HttpPost]
[Route(nameof(CreateTodo))]
public async Task<ActionResult<Todo>> CreateTodo([FromBody] Todo todo)
{
    try
    {
        _context.Todos.Add(todo);
        await _context.SaveChangesAsync();
        return CreatedAtRoute(nameof(GetTodo), new { todoId = todo.Id }, todo);
    }
    catch (Exception)
    {
        throw; // preserve the original stack trace
    }
}
```
It's important to note that the simple **Add** operation won't immediately insert our new entity into the database. Instead, it adds the entity to the **ChangeTracker** of the database context, setting it to the **Added** state. The actual insertion into the database will occur only when we call the `SaveChangesAsync()` method. This operation is crucial for confirming and making the changes permanent in the database. So, remember that `Add` is just the first step, and **SaveChangesAsync()** is what performs the final database insertion action.
#### **Update an Entity**
To update an entity in our database, we can retrieve our **TodoItem** by its ID and then make changes to the object. It's important to ensure that you save the changes to the database using SaveChanges or SaveChangesAsync after making the modifications to the object:
```csharp
[HttpPut("{todoItemId:int}")]
public async Task<ActionResult> UpdateTodoItem(
    int todoItemId, [FromBody] TodoItem todoItemUpdated)
{
    try
    {
        TodoItem? todoItemFromDatabase =
            await _context.TodoItems.FirstOrDefaultAsync(x => x.Id == todoItemId);
        if (todoItemFromDatabase == null)
            return NotFound();

        todoItemFromDatabase.IsCompleted = todoItemUpdated.IsCompleted;
        todoItemFromDatabase.Description = todoItemUpdated.Description;

        _context.TodoItems.Update(todoItemFromDatabase);
        await _context.SaveChangesAsync();
        return NoContent();
    }
    catch (Exception)
    {
        throw; // preserve the original stack trace
    }
}
```
In this controller, we retrieve the existing **TodoItem** object from the database based on the provided ID. This is done using `await _context.TodoItems.FirstOrDefaultAsync(x => x.Id == todoItemId);`. Subsequently, the properties of the **todoItemFromDatabase** object are updated with the new data provided. Finally, the **todoItemFromDatabase** object is marked as modified using `_context.TodoItems.Update(todoItemFromDatabase);`, and the changes are applied to the database with `await _context.SaveChangesAsync();`.
#### **Delete an Entity**
To delete a `Todo` item from the database, we can use the `.Remove()` method. EF Core will automatically delete the associated `TodoItem` entities belonging to this `Todo`, based on the relationship configuration. This simplifies the management of cascading deletions when configured correctly:
```csharp
[HttpDelete("{todoId}")]
public async Task<ActionResult> Delete(int todoId)
{
    try
    {
        Todo? todo = await _context.Todos.FirstOrDefaultAsync(y => y.Id == todoId);
        if (todo is null)
            return NotFound();

        _context.Todos.Remove(todo);
        await _context.SaveChangesAsync();
        return NoContent();
    }
    catch (Exception)
    {
        throw;
    }
}
```
If we now make a call to the endpoint **GetAllTodoWithItems**, we will see that the entity and its respective items have been deleted from our database.
Once you have understood how to set up the basic configuration and performed the main **CRUD** operations, you can confidently say that you have covered the fundamental knowledge needed to start working with EF Core. You will find all the code used in this article and the previous one available in this GitHub repository:
{% github Maurizio8788/ArticleAdoDotNetAndEFCore %}
I hope you've enjoyed this introduction to data processing in .NET, and I hope it proves to be helpful in your studies. If you liked the article, please give it a thumbs up, and I hope you're willing to leave a comment to share knowledge and exchange opinions.
Happy Coding!
| maurizio8788 |
1,636,587 | npm projects | To start an npm project, you can run npm init in the directory where you want to initiate the... | 0 | 2023-10-16T18:56:50 | https://dev.to/itsmohamedyahia/npm-projects-4o7a | npm, node, javascript, webdev | To start an npm project, you can run `npm init` in the directory where you want to initiate the project.
You will be asked a few questions about the name of the project and some other details, which you can skip by pressing `enter`.
After that, a `package.json` file will be created. It will be in JSON format, as can be inferred from the file extension.
JSON (JavaScript Object Notation) is a text-based format for representing JavaScript objects.

That is the default content of the file. The two curly braces and what's inside them is a JSON object literal.
#### Why literal or what does literal mean?
I too have wondered what "literal" means when it is wedged into a phrase that seems to make sense without it; "JSON object" is understandable without the "literal" at the end.
Also, you might have heard about the literal syntax for declaring objects in JavaScript, or about string literals.
A literal is a notation (a way of writing) for representing values. It is the "literal" way of writing a value down, and "literal" here carries the same meaning we already know, i.e. literally.
Let's make it clearer.
`let a = "boboddy"`
a is a string.
"boboddy" is also a string.
What is the difference between the two?
One is a variable holding the string; the other is the "literal" way of writing the string.
So variables and constants can't be string literals, or literals of any kind.
```
let stringVar = String("something")
```
so `String("something")` here is a value, right? And it is a string, because `String()` returns a string as its output. (It gets stored in `stringVar`, but we are not talking about the variable here; we are talking about `String("something")` as a value.) It is a string, but it is not a *string literal*, because it is not written in the literal string notation; the literal notation is `"something"`.
and the same goes for the other kind of values.
`5` is a number literal, but `2 + 3` is not, and neither is `Number("5")`. They produce numbers (the expression `2 + 3` evaluates to `5`, and so does `Number("5")`), but they are not number literals.
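You can sanity-check the string case from earlier in a console. A quick sketch: `String("something")` returns a plain primitive string, identical in value to the one the literal notation produces:

```javascript
// A literal and a value built by String() are the same primitive string
const literal = "something";
const constructed = String("something");

console.log(typeof literal);          // "string"
console.log(typeof constructed);      // "string"
console.log(literal === constructed); // true: same value, but only "something" is a literal
```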
You would often hear about the literal syntax for declaring an object. It looks like this:
```
let car = {
  model: "BMW",
  year: 2012
}
```
because those curly braces and the way of writing the key value pairs is the literal syntax.
```
let car = new Object();
car.model = "BMW"
car.year = 2012
```
That is another way of declaring an object but it is not the literal way.
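Both notations produce an equivalent object; only the way of writing it differs. A quick check:

```javascript
// Object built with the literal notation
const literalCar = { model: "BMW", year: 2012 };

// Same object built without the literal notation
const constructedCar = new Object();
constructedCar.model = "BMW";
constructedCar.year = 2012;

// Same keys and values, different notation
console.log(JSON.stringify(literalCar) === JSON.stringify(constructedCar)); // true
```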
I hope this is clear now.
---
Ok, let's go back. Where were we? Yes, JSON objects. So you can see a JSON object with key-value pairs.
The `scripts` key contains an object as its value, and this object consists of other key-value pairs. The default one we find is "test", but we can add more.
Every key we add to the `scripts` object is a script name, and the value would be the script itself (the script is a command or a series of commands).
The first default script is `test`, and when run it executes this line of code: `echo \"Error: no test specified\" && exit 1`
So it will echo "Error: no test specified" to the terminal, but not only that; it will also run another command: `exit 1`.
Scripts that run successfully exit with a status code of 0, and 0 indicates success. In this case, yes, `echo` did run successfully, but the script as a whole is a test script. We didn't run any tests, so it would be logical to treat the script as having failed (technically it didn't, yes). In that case, we want to change the exit status to a number that indicates failure; any number other than zero indicates failure, so they chose 1 as the exit status.
Note that this line of code is just a placeholder for the actual test script we might have and would replace this line with.
Now, if your project depends on a library like React or on a framework, you would mostly find a `dev` key with some command. When you run `npm run dev`, the app runs on a local server, and you can make a request to that server through your browser, to which the server responds with the `index.html` file.
Earlier we learned how an npm project is created and how to manipulate the package.json file, and I told you what the `test` script does when it runs, but we didn't learn how to run it.
Well, to run it, you open a terminal (the VS Code integrated terminal or any terminal program) at the project directory and run `npm run ScriptNameInPackage.jsonFileWeWantToRun`, so in our case `npm run test`.
```
PS D:\Teach\BLOGS\NPM> npm run test
> npm@1.0.0 test
> echo "Error: no test specified" && exit 1
"Error: no test specified"
```
We could also run `npm test` and the script would run, but that shortcut only works for some special script names like `test` and `start` (if we added a start script, it would run with `npm start` as well as `npm run start`). For any other custom names we add, we have to run them using `npm run scriptName`.
Now, why does a server spin up serving our web app when we do `npm run dev` in most web app projects using a library or framework?
Because the `dev` key is configured with a script, and that script runs a program (a dependency, so called because our app depends on it to run) which bundles the app and transforms the code of the library or framework into vanilla JS along with HTML and CSS. This dependency might be Vite, webpack, or another program.
My focus here is that `npm run dev` has nothing to do with those frameworks/libraries. You could create an empty directory, run `npm init` to initialize (hence "init") an npm project, add an HTML file, and add a `dev` key yourself whose value is `live-server`. live-server is a program that spins up a local server which serves your app. You have probably used it as an extension in VS Code, but you can also install it globally on your system by running `npm i -g live-server`. After you have installed it, you can run `npm run dev` in your directory and voilà, the HTML file opens in your browser. You can go back and add some text to the HTML file to confirm that it is indeed working.
Just one more thing that is intuitive: the file is called package.json because its main purpose is to keep a list of all the packages (hence the name), aka dependencies, aka programs that your app needs to run.

So in another key called `dependencies`, it stores an object of the dependencies and their allowed version ranges. `^11.11.0` allows newer minor and patch versions, so `11.12.0` is acceptable (the middle number is the minor version, the first is the major, and the third, i.e. 0 in this case, is the patch version).
*Read more about semantic versioning to get a clear idea of how versioning works*
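As a toy illustration of how a caret range behaves, here is a simplified sketch (not the real semver algorithm; real tools such as the `semver` package handle pre-releases, `0.x` versions, and many more cases):

```javascript
// Simplified: does `version` satisfy a caret range like "^11.11.0"?
// Assumes plain x.y.z versions with major >= 1.
function satisfiesCaret(range, version) {
  const [rMaj, rMin, rPat] = range.slice(1).split(".").map(Number);
  const [vMaj, vMin, vPat] = version.split(".").map(Number);
  if (vMaj !== rMaj) return false;        // caret pins the major version
  if (vMin !== rMin) return vMin > rMin;  // newer minor versions are allowed
  return vPat >= rPat;                    // same minor: patch must not go backwards
}

console.log(satisfiesCaret("^11.11.0", "11.12.0")); // true: newer minor
console.log(satisfiesCaret("^11.11.0", "12.0.0"));  // false: major changed
```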
devDependencies is for dependencies that are used in the development environment for the development process but that we don't need in production (also shortened to "prod").
---
And that concludes it. I hope you have enjoyed the read. If it won't bother you, leave a like or a comment and share your thoughts.
| itsmohamedyahia |
1,636,618 | A digital marketer finds creative solutions to drive brand awareness and lead generation via free or paid digital channels | I am Farabi Ahmed. I am a Digital Marketer and SEO expert with a strong track record of creating... | 0 | 2023-10-16T19:59:55 | https://dev.to/farabi/a-digital-marketer-finds-creative-solutions-to-drive-brand-awareness-and-lead-generation-via-free-or-paid-digital-channels-1bj4 | digitalmarketer, freelancer, seoexpert, promot | I am Farabi Ahmed. I am a Digital Marketer and SEO expert with a strong track record of creating successful campaigns that drive traffic, generate leads, and increase revenue.
With a deep understanding of online marketing strategies and tactics, I have developed a unique approach to optimizing websites for search engines, including Google, Bing, and Yahoo. My ability to identify target audiences, craft compelling messaging, and develop content that resonates with customers has allowed me to deliver results for my clients. I am passionate about digital marketing and SEO, and I constantly strive to stay up-to-date with the latest industry trends and best practices to deliver the highest level of service to my clients.
more https://dev-hyfarabi.pantheonsite.io/ | farabi |
1,636,622 | Detail Explanation This Keyword in Java | In Java, the this keyword is a reference variable that refers to the current object. It is a special... | 0 | 2023-10-16T20:09:25 | https://dev.to/gaurbprajapati/this-keyword-in-java-5cpf | java, programming, coding, tutorial |
In Java, the `this` keyword is a reference variable that refers to the current object. It is a special keyword that has several important uses in object-oriented programming. Here's why and how we use the `this` keyword in Java:
### 1. `this` as a reference variable that refers to the current object:
In real life, think of a person filling out a form. When the form asks for the person's name, the individual might say "my name is John". Here, "my" refers to the current person, similar to how `this` refers to the current object in Java.
```java
class Person {
    String name;

    public void setName(String name) {
        this.name = name; // 'this' refers to the current Person object
    }
}

public class Main {
    public static void main(String[] args) {
        Person person = new Person();
        person.setName("John");
        System.out.println("Person's name: " + person.name); // Output: Person's name: John
    }
}
```
### 2. `this` to refer to the current class instance variable:
Imagine a car with various attributes. The car's color can be referred to as `this.color`.
```java
class Car {
    String color;

    public void setColor(String color) {
        this.color = color; // 'this.color' refers to the instance variable 'color'
    }
}
```
### 3. `this` to invoke current class method:
Consider a class representing a fan. The method `turnOn()` can be invoked using `this.turnOn()`.
```java
class Fan {
    public void turnOn() {
        System.out.println("Fan is turned on");
    }

    public void startFan() {
        this.turnOn(); // Invoking the current class method using 'this'
    }
}
```
### 4. `this()` to invoke the current class constructor:
Think of a class representing a book. You might have multiple constructors for different types of books. Using `this()` allows calling one constructor from another.
```java
class Book {
    String title;
    String author;

    public Book(String title) {
        this.title = title;
    }

    public Book(String title, String author) {
        this(title); // Invoking another constructor using 'this()'
        this.author = author;
    }
}
```
### 5. `this` passed as an argument in the method call:
Consider a class representing a printer. The print job might need information about the printer itself.
```java
class Printer {
    String brand;

    public void printDocument(Printer printer) {
        System.out.println("Printing from " + printer.brand);
    }

    public void startJob() {
        this.printDocument(this); // Passing the current object as an argument
    }
}
```
### 6. `this` passed as an argument in the constructor call:
Think of a class representing a cell phone. The phone might need information about the network it belongs to.
```java
class Call {
    public Call(CellPhone from, CellPhone to) {
        System.out.println("Calling from " + from.network + " to " + to.network);
    }
}

class CellPhone {
    String network;

    public CellPhone(String network) {
        this.network = network;
    }

    public Call callTo(CellPhone other) {
        return new Call(this, other); // Passing the current object to a constructor
    }
}
```
### 7. `this` to return the current class instance from the method:
Imagine a class representing a person profile. The method `getProfile()` returns the current person's profile.
```java
class PersonProfile {
    String name;
    int age;

    public PersonProfile(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public PersonProfile getProfile() {
        return this; // Returning the current class instance using 'this'
    }
}
```
Understanding these real-life scenarios and examples should help you grasp how the `this` keyword works in Java's object-oriented programming paradigm. Feel free to experiment with these examples to deepen your understanding further! | gaurbprajapati |
1,636,648 | The Evolution of "Do My Exam Online" Services | The digital era has not only transformed the way we communicate, shop, and entertain ourselves but... | 0 | 2023-10-16T21:10:02 | https://dev.to/carlosstewart1/the-evolution-of-do-my-exam-online-services-33lg |  | The digital era has not only transformed the way we communicate, shop, and entertain ourselves but has also revolutionized the educational landscape. One of the most significant advancements in this domain is the rise of online examination services [https://domyexams.net/](https://domyexams.net/). While skeptics once questioned their efficacy, today they are an integral part of the modern educational framework. Here's an in-depth exploration of these services.
**The Genesis of Online Examinations**
As technology started permeating every aspect of our lives, the education sector was no exception. Online courses began to emerge, and with them came the need for a reliable method to assess students remotely. This was the genesis of online exams. Initially fraught with technical glitches and limited in their scope, these exams have now evolved into sophisticated systems that rival, if not surpass, traditional testing methods.
**Advantages of Opting for Online Exams**
**Time-Efficiency**
One of the most notable benefits is the convenience it offers. Students can take exams from the comfort of their homes, saving commute time and reducing the stress often associated with the traditional examination environment.
**Flexibility**
Online exams often come with the flexibility of choosing a preferred time slot, allowing students to take the test when they feel most prepared and alert.
**Instant Feedback**
Thanks to automated grading systems, students can receive immediate feedback on their performance, enabling them to identify areas of improvement promptly.
**Challenges and Solutions in Online Examination Services**
While online exams offer numerous advantages, they are not without challenges. However, as the demand for these services grows, solutions are rapidly emerging.
**Technical Glitches**
One of the primary concerns is the potential for technical issues. However, with advancements in technology and IT support systems, platforms have become more stable and user-friendly.
**The Economic Aspect of Services**
Switching to online examinations isn't just an educational decision; it has significant economic implications as well.
**Cost Savings**
Educational institutions can save substantially on logistical and infrastructural expenses. There's no need for physical venues, invigilators, or printed materials. This often translates to reduced exam fees for students.
**Boon for Companies**
The demand for reliable online examination platforms has led to the booming success of companies, which are continually innovating to offer better services.
**Looking Towards the Future: The Potential of Online Exams**
As with any technological innovation, the realm of online examinations is continuously evolving, promising exciting prospects for the future.
**AI-Integrated Systems**
With the advent of Artificial Intelligence, we can anticipate online exams that offer real-time assistance, advanced anti-cheating mechanisms, and personalized feedback based on a student's performance pattern.
**Global Standardization**
As online exams gain traction globally, there's potential for a standardized examination system accepted across institutions worldwide. This could simplify administrative processes and ensure consistent examination quality.
In conclusion, the rapid advancement and adoption of services signal a paradigm shift in the world of education. While challenges exist, the benefits and potential of these platforms are undeniable. As technology continues to advance and educators and students become more accustomed to this new mode of examination, online exams might just become the new norm, reshaping the future of global education.
**The Societal Impact of Online Examination Platforms**
Beyond the academic realm, the wave of online examination platforms has had far-reaching societal implications. By breaking the traditional barriers of geography and infrastructure, these platforms are democratizing education in unforeseen ways.
**Bridging the Urban-Rural Divide**
In many parts of the world, students in rural areas often lack the same educational opportunities as their urban counterparts, largely due to infrastructure constraints. With online exams, a student in a remote village can have the same examination experience as someone in a bustling city, provided they have a stable internet connection. This is slowly narrowing the academic gap between urban and rural communities.
**Catering to Differently-Abled Students**
Physical examination centers can pose challenges for differently-abled students. Navigating large campuses, accessing examination halls, and sometimes the sheer anxiety of being in unfamiliar territory can be daunting. Online exams can alleviate many of these challenges, offering a comfortable and familiar environment for these students.
**Reduction in Paper Usage**
Traditional exams often require vast amounts of paper for question sheets, answer sheets, hall tickets, and more. Moving exams online can drastically reduce this paper consumption, leading to fewer trees being cut down for paper production.
**Enhancing Student Preparedness with Mock Tests**
Online examination platforms often offer mock tests for students, enabling them to familiarize themselves with the online testing environment.
**Boosting Confidence**
Practicing with mock tests can significantly boost a student's confidence. They get a clear idea of the examination pattern, the type of questions, and the overall testing environment.
**Time Management**
Mock tests also help students hone their time management skills, a critical aspect of performing well in timed exams.
**Personalization: The Next Frontier in Online Examinations**
With the integration of advanced algorithms and AI, there's an emerging trend in the personalization of online exams.
**Adaptive Testing**
Some platforms are now offering adaptive testing, where the difficulty level of the exam adjusts based on a student's performance. For instance, if a student answers a series of questions correctly, the test might present more challenging questions, ensuring that the examination is accurately gauging the depth of a student's knowledge.
**Personalized Feedback**
Instead of generic feedback, AI-powered platforms can provide insights tailored to each student. This could include topics they are strong in, areas that need improvement, and even recommendations for further studies or resources.
**Continuous Improvement through Data Analytics**
One of the most significant advantages of online exams is the vast amount of data they generate. By analyzing this data, educators can gain invaluable insights.
**Identifying Patterns**
Data analytics can help in identifying patterns, like common mistakes made by students, topics that are consistently challenging, or even potential flaws in the examination itself.
**Enhancing Curriculum**
With data-driven insights, educational institutions can refine their curriculum, placing emphasis on areas where students typically struggle and ensuring a more rounded and effective educational experience.
In wrapping up, the proliferation of services is not just a testament to technological advancement but is indicative of a broader shift in societal values and priorities. As the world leans more towards inclusivity, sustainability, and efficiency, online examinations perfectly encapsulate these ideals. With continuous innovation and refinement, they are set to reshape the educational landscape for generations to come. | carlosstewart1 | |
1,636,728 | Styling React Components | Personally, for me, I dont like to style. I mean, I dont hate CSS and styling but I hate when I give... | 0 | 2023-10-19T18:21:45 | https://dev.to/balamurugan16/styling-react-components-i2 |  | 
Personally, I don't like to style. I mean, I don't hate CSS and styling, but I hate it when I give styling a try and it looks bad.

Anyway, I don't have to emphasize the importance of styling: if your users are to stay in your app, then the app should look good. Unlike the dating culture, in web development **looks do matter!** (Just kidding, looks matter in dating as well! 😂)
In this article, I will share 5 different ways to style react components so that you can choose the approach that suits you when you develop your application.
But first of all, why should you know all the different ways to style, you may ask. Isn't knowing one approach enough? Yes and no. In a large codebase, consistency is easy to take for granted: developers should be aware of the styling approach the codebase uses and not just go with the one they happen to be familiar with. Knowing that the other approaches exist will keep you ready to tackle any situation.
## 1. The `style` attribute
In React, you can write inline styles using the `style` attribute.
```
<div
  style={{
    display: "flex",
    justifyContent: "center",
    alignItems: "center"
  }}>
  <h1>Hi Mom!</h1>
</div>
```
As you can see, this approach has a few problems: if the number of CSS properties increases, the component gets bloated with a bunch of CSS; reusability is not an option; and inline CSS has high specificity, so if you already have some styles, the inline styles will most probably take precedence!
## 2. CSS or Sass
You can also keep your styles in a separate stylesheet and import it into the component. This gives you the benefits of CSS or Sass features like nesting, variables, mixins, and so on. I like this approach and I will never argue against it; the only thing is that there are more modern approaches that solve the problem better. But if you prefer this approach, then you can very well go with it.
## 3. CSS in JS
Initially, I didn't like this approach, but when I started to use it, it was a game changer. Essentially, there are libraries like styled-components that let us write styles in a normal CSS way, give additional features like nesting, conditional styling, and so on, and produce a React wrapper component out of the styles that we define. This approach is really useful because you can organize your styled React components better. Look at the following example.
```
export const App = () => {
  return (
    <Wrapper>
      <h1>Hi Mom!</h1>
    </Wrapper>
  );
};

const Wrapper = styled.div`
  display: flex;
  justify-content: center;
  align-items: center;
```
So here, notice the way I have written the CSS. Yes, the API that this library gives is a bit weird, but you will get used to it in no time. If you are using VS Code, install the styled-components extension; it gives better syntax highlighting and auto-complete support, which will be handy!
## 4. Tailwind CSS
Tailwind is by far the best and most modern approach for styling. Tailwind's approach is to provide utility classes for the CSS properties you already know, like `flex` meaning `display: flex`, and so on. This lets you quickly style your components without worrying about naming a class or id just to style them. Sometimes your class lists can become a nightmare, but there are ways to work around that as well. Tailwind provides customization too, with the `tailwind.config.js` file, where you can essentially configure your typography, colors and so on. Look at the following example.
```
export const App = () => {
return (
<div className="flex justify-center items-center">
<h1>Hi Mom!</h1>
</div>
);
};
```
As you can see, it is intuitive and easy, and developers can quickly style their components. If you are using VS Code, install the Tailwind extension for auto-complete support. It will be very useful.
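Since the customization mentioned above happens in `tailwind.config.js`, here is a minimal sketch of what extending the theme looks like (the color, font, and glob values are made up for illustration):

```javascript
// tailwind.config.js (sketch)
const config = {
  content: ["./src/**/*.{js,jsx,ts,tsx}"], // files Tailwind scans for class names
  theme: {
    extend: {
      colors: { brand: "#1e40af" },               // adds a `bg-brand` / `text-brand` utility
      fontFamily: { heading: ["Poppins", "sans-serif"] }, // adds `font-heading`
    },
  },
  plugins: [],
};

module.exports = config;
```

Anything under `theme.extend` is added on top of Tailwind's defaults instead of replacing them.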
## 5. Component Library
The final approach is going with a component library. Component libraries come with pre-styled components reducing our work. There are several libraries out there like MUI, Daisy UI, Chakra UI and so on. Usually, component libraries come with a built-in opinionated way of styling, for example, MUI comes with styled-components for styling and Daisy UI is built on top of Tailwind CSS. This way, you get pre-styled components and a way to add styles on top of them.
To conclude, now you know 5 different ways to style React components. Next time, you can pitch the right approach to your seniors, which will make a good impression on your colleagues. Trust me, you can pick up girls if you know CSS! (Just kidding, programming nerds can never get girls by revealing that they are programmers!) | balamurugan16 |
1,636,743 | Day 9: Another Certification Project Completed | Hey guys, I'm happy to share another success story in this field with you guys. The Journey keep... | 0 | 2023-10-16T23:25:22 | https://dev.to/duke09/day-9-another-certification-project-completed-4emc | Hey guys,
I'm happy to share another success story in this field with you guys. The journey keeps getting interesting; that alone gives me joy.
I was tasked to finish another certification project on freeCodeCamp. The project wasn't that difficult, just a simple tribute page: a few tags with one anchor tag. Tomorrow, another task comes up, I can't wait...
Link: https://github.com/Duke411/Tribute-Page
| duke09 | |
1,636,792 | Entendendo Algoritmos - Segunda Semana | Recapitulando... Na semana passada, tivemos o primeiro contato com algoritmos de ordenação... | 0 | 2023-10-17T13:29:58 | https://dev.to/loremimpsu/entendendo-algoritmos-segunda-semana-ao0 | beginners, programming, hacktoberfest, algorithms |
## Recap
Last week, we had our first contact with sorting algorithms and binary search (you can check it out [here](https://dev.to/loremimpsu/entendendo-algoritmos-primeira-semana-5fmn)).
## Recursion

Recursion is a programming tool that reuses a block of code to create a repetition process out of a code routine. The concept may seem complex, but it is just the creation of a loop from a piece of code rather than from a dedicated control structure.
In case you want to go deeper into the subject of recursion:
- [recursion](https://carlacastanho.github.io/Material-de-APC/aulas/aula14.html);
A classic example of recursion is the Tower of Hanoi. Yes, that toddler's toy. The Tower of Hanoi is the best recursion exercise you can practice. If you want some help with the Tower of Hanoi:
- [how to solve the Tower of Hanoi - freeCodeCamp](https://www.freecodecamp.org/portuguese/news/como-resolver-o-problema-da-torre-de-hanoi-um-guia-ilustrado-do-algoritmo/)

## Recursion stack
When working with recursion, we need to deal with the recursion stack. We already studied stacks as a data structure in the [previous chapter](https://dev.to/loremimpsu/entendendo-algoritmos-primeira-semana-5fmn). Today we see a similar structure, except it works by stacking the calls of a recursive function. To dig deeper, we have:
- [recursion stack](https://www.mundojs.com.br/2019/06/23/recursao-e-a-pilha-stack/)
As an example that uses the recursion stack to its fullest, we have the case of Fibonacci. It relies on the stack to compute an answer. Let's take a deeper look at the Fibonacci algorithm:
- [Fibonacci algorithm](http://devfuria.com.br/logica-de-programacao/recursividade-fibonacci/)

## Divide and conquer and Quicksort
Divide and conquer is an algorithmic technique that splits a problem into several smaller problems that are easier to solve. It is based on Euclid's algorithm, which is also one of the interesting mathematical algorithms to study.
- [Euclid's algorithm](https://pt.khanacademy.org/computing/computer-science/cryptography/modarithmetic/a/the-euclidean-algorithm#:~:text=O%20Algoritmo%20Euclidiano%20para%20encontrar,e%20podemos%20parar%20a%20verifica%C3%A7%C3%A3o.)

Divide and conquer is the basis of several other algorithms, including Quicksort, which was the algorithm studied in the book. If you want to know more about Quicksort, see:
- [quicksort](https://joaoarthurbm.github.io/eda/posts/quick-sort/)
Besides Quicksort, there is a second very important algorithm: Mergesort. Mergesort is one of the algorithms most often asked about in technical interviews, since it promises a low-cost **O(n log n)** algorithm, with the bonus of not exceeding the required memory by much. Quicksort is relatively faster, but spends a lot on the memory resource, which is where Mergesort comes in to help. To learn more about Mergesort, see:
- [mergesort](https://joaoarthurbm.github.io/eda/posts/merge-sort/)
In a different class of sorting algorithms, we have an algorithm famous for its low cost: counting sort. Its cost is estimated at **O(n + k)**, where **k** is a constant derived from the size of the list. In other words, if the list has 10 elements, the cost of the algorithm would be O(n + 10). A great algorithm, with linear growth, isn't it? Yes, but since everything good has its price, counting sort will charge you in memory usage. To learn more about this algorithm, take a look at this material:
- [counting sort](https://joaoarthurbm.github.io/eda/posts/ordenacao-linear/)
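As a quick illustration of the idea (not taken from the book), here is a minimal counting sort in JavaScript; `maxValue` plays the role of **k**:

```javascript
// Counting sort for small non-negative integers: O(n + k) time, O(k) extra memory.
function countingSort(arr, maxValue) {
  const counts = new Array(maxValue + 1).fill(0); // this array is the O(k) memory price
  for (const v of arr) counts[v]++;               // tally each value
  const out = [];
  for (let v = 0; v <= maxValue; v++) {
    for (let i = 0; i < counts[v]; i++) out.push(v); // emit each value `counts[v]` times
  }
  return out;
}

console.log(countingSort([4, 2, 2, 8, 3, 3, 1], 8)); // [1, 2, 2, 3, 3, 4, 8]
```

Notice that no comparisons between elements happen at all, which is how it escapes the O(n log n) lower bound of comparison sorts.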
Counting sort has variations that improve this memory problem: bucket sort and radix sort. They work with auxiliary data structures that we already saw in the [previous chapter](https://dev.to/loremimpsu/entendendo-algoritmos-primeira-semana-5fmn), linked lists and array lists, to do this optimization. To learn more, visit:
- [bucket sort and radix sort](https://algoritmosempython.com.br/cursos/algoritmos-python/pesquisa-ordenacao/bucket-radix/)
## Hash tables
If you have been programming for a while, you have probably run into the Map data structure, also known as a dictionary, or simply a hash table. But why is it so important? The hash table guarantees access to a previously stored and cataloged value at a cost of O(1). That is, access is instantaneous. This works through the method by which we translate the key's value into a memory address and store its content at the resulting location. This computation method is called a hash code. If you want to know a few more details about the table and how to build a hash code, check out this link:
- [hash table](https://joaoarthurbm.github.io/eda/posts/hashtable/)
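Here is a minimal JavaScript sketch of the hash-code idea: a toy string hash (a djb2-style variant, not any specific library's implementation) mapped onto a fixed number of buckets:

```javascript
// Turn a string key into a bucket index (the "memory address" of the entry).
function hashCode(key, numBuckets) {
  let h = 5381;
  for (const ch of key) {
    h = (h * 33 + ch.charCodeAt(0)) >>> 0; // keep it an unsigned 32-bit integer
  }
  return h % numBuckets; // fold the big number into a valid bucket index
}

console.log(hashCode("grokking", 16)); // the same key always lands in the same bucket
```

The two properties that matter are exactly the ones used above: the function is deterministic (same key, same bucket) and its output always fits the table size.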
## Conclusion
Most of the subjects here are an add-on to the main topic covered in the book club's current book. If you are not following the book but are interested in these subjects, these are the articles published so far:
- [Entendendo algoritmos - introdução](https://dev.to/loremimpsu/entendendo-algoritmos-introducao-53f0)
- [Entendendo algoritmos - primeira semana](https://dev.to/loremimpsu/entendendo-algoritmos-primeira-semana-5fmn)
If you want to follow our club while reading the book, consider yourself invited to join the Discord:
- [Invite link](https://discord.gg/DWDGjacWyM)
If you want to buy the book, order it through our link. The amount earned through the affiliate link will be turned into books for people with low vision or with some disability that calls for ... unorthodox means of reading.
- [Book link - Entendendo algoritmos](https://amzn.to/3tD028z) | loremimpsu |
1,636,800 | How can I prepend a domain name to all images in a rails 7 app using activestorage attachments | How do I prepend a domain name to the path to /images ? | 0 | 2023-10-17T02:08:51 | https://dev.to/mices/how-can-i-prepend-a-domain-name-to-all-images-in-a-rails-7-app-using-activestorage-attachments-4khh | rails | How do I prepend a domain name to the path to /images ? | mices |
1,636,948 | BrightSwipe Inc. Launches Next-Gen Public Safety Solution for Person Verification and Criminal History Checks | BrightSwipe, Inc., a leading provider of digital public and dating safety solutions, is proud to... | 0 | 2023-10-17T06:19:22 | https://dev.to/matrubharti/brightswipe-inc-launches-next-gen-public-safety-solution-for-person-verification-and-criminal-history-checks-1dne | criminalcheck, criminalbackgroundverification, anticatfish | **[ BrightSwipe, ](https://brightswipe.com/)**Inc., a leading provider of digital public and dating safety solutions, is proud to announce the launch of its revolutionary person verification and criminal history checks solution. Available on an iPhone via the web mobile version and in the Google Play Store, BrightSwipe utilizes Next-Gen technology by accessing records in near real-time using proprietary logic to provide users with key knowledge about potential matches and interactions before meeting in-person. This includes verifying identities, checking for criminal histories and detecting potential catfishing activity. BrightSwipe helps users feel confident and safe when interacting with potential matches for dating or other social interactions.
**[BrightSwipe](https://brightswipe.com/)**, Inc., a leading provider of digital public and dating safety solutions.
By aggregating public records and criminal data to deliver a comprehensive check in just a few seconds, BrightSwipe provides three unique verification checks:
• **[Anti-catfish Check](https://brightswipe.com/services/):** Verify the authenticity of your matches to prove they are who they say they are. This allows BrightSwipe to provide an easy-to-read check that verifies a person's name, location, age and marital status.
• **Criminal Check:** This service provides a report that verifies and shows a match's criminal history and past arrest records.
• **Social Check:** Validate someone's social media accounts to determine if they are possibly a bot or fraudster. This allows visibility into their social media presence, which helps prove their legitimacy.

| matrubharti |
1,637,097 | Create and Deploy a Smart Contract on the NEAR Protocol | This article gives you a comprehensive guide on smart contract development and deployment on the NEAR... | 0 | 2023-10-17T08:05:22 | https://dev.to/oodlesblockchain/create-and-deploy-a-smart-contract-on-the-near-protocol-39je | smartcontract, blockchain, webdev, beginners | This article gives you a comprehensive guide on [smart contract development](https://blockchain.oodles.io/smart-contract-development-services/) and deployment on the NEAR platform.
NEAR Protocol is a decentralised [blockchain app development platform](https://blockchain.oodles.io/) that has been dubbed the "Ethereum killer." It attempts to make the process of creating dApps simpler and more user-friendly for developers. Users can use its native token, NEAR, to pay for storage and transaction costs on the NEAR platform.
NEAR Protocol is a prime example of a third-generation digital ledger. It focuses on solving scalability issues, and it incentivizes the network to develop and deploy dApps in the ecosystem in order to promote decentralized finance.
## The two distinct parts of the NEAR Protocol
- A native token, named NEAR
- Smart contracts
Smart contracts manage the on-chain storage and the modification of data. They also cover interactions with other smart contracts: we are able to communicate with both the contracts we deploy and the contracts that others deploy.
## How to create a smart contract on NEAR Protocol
Go to [https://wallet.testnet.near.org/](https://wallet.testnet.near.org) and select the 'create account' option. Then, enter the testnet account name that you wish to use.
### Prerequisites
- Rust toolchain
- A NEAR account
- NEAR command-line interface (near-cli)
### Set up the requirements
Using the Rust environment, create a NEAR account and then install near-cli.
### Install the Rust toolchain
Install rustup:
```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
### Configure the current shell
Run the following command to configure your current shell:
```bash
source $HOME/.cargo/env
```
### Add the wasm compilation target
```bash
rustup target add wasm32-unknown-unknown
```
### Repository creation
```bash
cargo new calculator
cd calculator
```
### Edit Cargo.toml
```toml
[package]
name = "rust-counter-tutorial"
version = "0.1.0"
authors = ["NEAR Inc <hello@near.org>"]
edition = "2018"

[lib]
crate-type = ["cdylib", "rlib"]

[dependencies]
near-sdk = "3.1.0"

[profile.release]
codegen-units = 1
# Tell `rustc` to optimize for small code size.
opt-level = "z"
lto = true
debug = false
panic = "abort"
# Opt into extra safety checks on arithmetic operations:
# https://stackoverflow.com/a/64136471/249801
overflow-checks = true
```
### Create lib.rs
```rust
use near_sdk::borsh::{self, BorshDeserialize, BorshSerialize};
use near_sdk::{env, near_bindgen};

near_sdk::setup_alloc!();

#[near_bindgen]
#[derive(Default, BorshDeserialize, BorshSerialize)]
pub struct Calculator {
    x: i64,
    y: i64,
}

#[near_bindgen]
impl Calculator {
    pub fn get_numbers(&self) -> (i64, i64) {
        (self.x, self.y)
    }

    pub fn add_numbers(&mut self) -> i64 {
        let add = self.x + self.y;
        let log_message = format!("Addition of numbers {} + {} = {}", self.x, self.y, add);
        env::log(log_message.as_bytes());
        add
    }

    pub fn sub_numbers(&mut self) -> i64 {
        let sub = self.x - self.y;
        let log_message = format!("Subtraction of numbers {} - {} = {}", self.x, self.y, sub);
        env::log(log_message.as_bytes());
        sub
    }

    pub fn mul_numbers(&mut self) -> i64 {
        let mul = self.x * self.y;
        let log_message = format!(
            "Multiplication of numbers {} * {} = {}",
            self.x, self.y, mul
        );
        env::log(log_message.as_bytes());
        mul
    }

    pub fn div_numbers(&mut self) -> i64 {
        let div = self.x / self.y;
        let log_message = format!("Division of numbers {} / {} = {}", self.x, self.y, div);
        env::log(log_message.as_bytes());
        div
    }
}

// use the attribute below for unit tests
#[cfg(test)]
mod tests {
    use near_sdk::MockedBlockchain;
    use near_sdk::{testing_env, VMContext};

    use crate::Calculator;

    fn get_context(input: Vec<u8>, is_view: bool) -> VMContext {
        VMContext {
            current_account_id: "alice.testnet".to_string(),
            signer_account_id: "robert.testnet".to_string(),
            signer_account_pk: vec![0, 1, 2],
            predecessor_account_id: "jane.testnet".to_string(),
            input,
            block_index: 0,
            block_timestamp: 0,
            account_balance: 0,
            account_locked_balance: 0,
            storage_usage: 0,
            attached_deposit: 0,
            prepaid_gas: 10u64.pow(18),
            random_seed: vec![0, 1, 2],
            is_view,
            output_data_receivers: vec![],
            epoch_height: 19,
        }
    }

    // mark individual unit tests with #[test] for them to be registered and fired
    #[test]
    fn addition() {
        // set up the mock context in the testing environment
        let context = get_context(vec![], false);
        testing_env!(context);
        // instantiate a contract with x = 20 and y = 10
        let mut contract = Calculator { x: 20, y: 10 };
        let num = contract.add_numbers();
        println!("Value after addition: {}", num);
        // confirm that add_numbers returned 30
        assert_eq!(30, num);
    }

    #[test]
    fn subtraction() {
        // set up the mock context in the testing environment
        let context = get_context(vec![], false);
        testing_env!(context);
        // instantiate a contract with x = 20 and y = 10
        let mut contract = Calculator { x: 20, y: 10 };
        let num = contract.sub_numbers();
        println!("Value after subtraction: {}", num);
        // confirm that sub_numbers returned 10
        assert_eq!(10, num);
    }

    #[test]
    fn multiplication() {
        // set up the mock context in the testing environment
        let context = get_context(vec![], false);
        testing_env!(context);
        // instantiate a contract with x = 20 and y = 10
        let mut contract = Calculator { x: 20, y: 10 };
        let num = contract.mul_numbers();
        println!("Value after multiplication: {}", num);
        // confirm that mul_numbers returned 200
        assert_eq!(200, num);
    }

    #[test]
    fn division() {
        // set up the mock context in the testing environment
        let context = get_context(vec![], false);
        testing_env!(context);
        // instantiate a contract with x = 20 and y = 10
        let mut contract = Calculator { x: 20, y: 10 };
        let num = contract.div_numbers();
        println!("Value after division: {}", num);
        // confirm that div_numbers returned 2
        assert_eq!(2, num);
    }
}
```
## Test the code
Test the smart contract code via cargo:
```bash
cargo test
```
You will then receive output in the form of:
```
   Compiling rust-counter-tutorial v0.1.0 (/home/yogesh/Blogs/rust-counter)
    Finished test [unoptimized + debuginfo] target(s) in 1.57s
     Running unittests src/lib.rs (target/debug/deps/rust_counter_tutorial-7e3850288c4d6416)

running 4 tests
test tests::division ... ok
test tests::addition ... ok
test tests::multiplication ... ok
test tests::subtraction ... ok

test result: ok. 4 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests rust-counter-tutorial

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```
## Compile the code
```bash
cargo build --target wasm32-unknown-unknown --release
```
## Deploy the smart contract
You will deploy it to the NEAR testnet using near-cli and your NEAR account:
```bash
near login
near deploy --wasmFile target/wasm32-unknown-unknown/release/rust_counter_tutorial.wasm --accountId YOUR_ACCOUNT_HERE
```
You will then receive output in the form of:
```
To see the transaction in the transaction explorer, please open this url in your browser
https://explorer.testnet.near.org/transactions/CqEh5tsh9Vo827DkN4EurECjK1ft8pRdkY2hhCYN9W8i
Done deploying to yogeshsha.testnet
```
If you want more information on how to get started with NEAR blockchain-based dApp development, you may connect with our [Smart Contract Developers](https://blockchain.oodles.io/contact-us/#contact-us).
| oodlesblockchain |
1,637,346 | Striking a Balance: The Rise of StrikeCo Electric Scooters in India | In the bustling streets of India, where urban congestion and air pollution continue to pose... | 0 | 2023-10-17T11:32:00 | https://dev.to/rohitjain/striking-a-balance-the-rise-of-strikeco-electric-scooters-in-india-48d5 | electric, scooter | In the bustling streets of India, where urban congestion and air pollution continue to pose significant challenges, electric scooters have emerged as a game-changing solution. One brand, in particular, is making waves with its innovative approach – StrikeCo Electric Scooters in india.
The Need for Electric Mobility in India
India's population and economy have been growing steadily, leading to an increased demand for personal transportation. However, the conventional internal combustion engine vehicles have exacerbated air pollution and traffic congestion in many cities. This has prompted a shift towards sustainable and eco-friendly alternatives, and electric scooters have emerged as a front-runner.
StrikeCo: A Name You Can Trust
StrikeCo, a leading electric scooter manufacturer, has entered the Indian market with a commitment to provide eco-friendly, energy-efficient, and cost-effective transportation solutions. Their electric scooters are designed to not only reduce the carbon footprint but also offer a convenient and stylish mode of commuting.
Why StrikeCo Electric Scooters Are Making an Impact:
1. Environmental Benefits: StrikeCo electric scooters produce zero emissions, helping to reduce air pollution and combat climate change.
2. Cost Savings: With significantly lower operating costs compared to traditional petrol-powered scooters, StrikeCo offers a more economical and sustainable choice for daily commuting.
3. Convenience: StrikeCo scooters are lightweight, easy to maneuver, and come with modern features like smartphone connectivity and GPS, making them a practical choice for urban commuters.
4. Range and Performance: These scooters are equipped with advanced lithium-ion batteries that offer impressive mileage on a single charge. This ensures that you won't be left stranded due to limited range.
5. Style and Design: StrikeCo scooters are designed to be sleek, modern, and appealing, catering to the aesthetic preferences of the younger, tech-savvy generation.
The Future of Urban Mobility
StrikeCo electric scooters in India are becoming more than just a mode of transportation; they are part of a larger shift towards sustainable urban mobility. As India's cities continue to grow and grapple with transportation challenges, electric scooters provide a reliable and eco-conscious solution.
Whether it's for daily commuting, short trips, or last-mile connectivity, StrikeCo [electric scooters in India](https://strikeco.co.in/) are poised to revolutionize the way people move in Indian cities. They make it possible for individuals to make environmentally responsible choices without compromising on convenience, style, or performance.
As India embraces electric mobility, StrikeCo is at the forefront, offering a greener and more efficient way of getting around. With their innovative technology and commitment to sustainability, they are set to play a pivotal role in shaping the future of urban transportation in India.
https://strikeco.co.in/ | rohitjain |
1,637,437 | Deploy Next JS App To Cpanel Using Github Actions | Deployment can be a challenging and time-consuming process if you zip files manually, put the zip on... | 0 | 2023-11-04T17:01:35 | https://dev.to/heyitsuzair/deploy-next-js-app-to-cpanel-using-github-actions-1nl9 | Deployment can be a challenging and time-consuming process if you zip files manually, put the zip on the server, and extract the files each time🥴
GitHub Actions solves this problem by giving us an automated CI/CD platform.
## **Prerequisites**
- FTP account created in cpanel ([Check this link](https://support.cpanel.net/hc/en-us/articles/1500012573002-How-to-create-an-FTP-account))
- Configuration of FTP details in GitHub repository secrets; if you don't know how to add one, you can check this [link](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-a-repository) out
You should have 3 mandatory repository secrets in your repo settings
- FTP_HOST
- FTP_USERNAME
- FTP_PASSWORD
## **Setting Up Cpanel**
Before proceeding, create an empty Node.js app in cPanel with the application startup file named `server.js`. You can visit this [link](https://www.inmotionhosting.com/support/edu/cpanel/setup-node-js-app/) for reference.
## **Process**
In your `next.config.js`, add the following line
```
distDir: "_next",
```
Create a file named as `server.js` in root of your directory and add the following code in it
```
const { createServer } = require("http");
const { parse } = require("url");
const next = require("next");

const dev = process.env.NODE_ENV !== "production";
const hostname = "localhost";
const port = process.env.PORT || 3000;

const app = next({ dev, hostname, port });
const handle = app.getRequestHandler();

app.prepare().then(() => {
  createServer(async (req, res) => {
    try {
      const parsedUrl = parse(req.url, true);
      const { pathname, query } = parsedUrl;

      if (pathname === "/a") {
        await app.render(req, res, "/a", query);
      } else if (pathname === "/b") {
        await app.render(req, res, "/b", query);
      } else {
        await handle(req, res, parsedUrl);
      }
    } catch (err) {
      console.error("Error occurred handling", req.url, err);
      res.statusCode = 500;
      res.end("internal server error");
    }
  }).listen(port, (err) => {
    if (err) throw err;
    console.log(`> Ready on http://${hostname}:${port}`);
  });
});
```
Create a file named `deploy.yml`, the path of the file should be
`/projectpath/.github/workflows/deploy.yml`
Look, I am not going to take much of your time, so if you only want the code, here it is.
```
name: Deploy to FTP

on:
  push:
    branches:
      - master # Adjust the branch name if needed

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Set up Node.js
        uses: actions/setup-node@v2
        with:
          node-version: 14.21.2 # Use the desired Node.js version

      - name: Install dependencies and build Next.js app
        run: |
          yarn install # Install project dependencies
          touch .env.local
          echo MONGO_URL=${{ secrets.MONGO_URL }} >> .env.local
          echo JWT_SECRET=${{ secrets.JWT_SECRET }} >> .env.local
          yarn build # Build Next.js app

      - name: Deploy via FTPS
        uses: SamKirkland/FTP-Deploy-Action@4.3.3
        with:
          server: ${{ secrets.FTP_HOST }}
          username: ${{ secrets.FTP_USERNAME }}
          password: ${{ secrets.FTP_PASSWORD }}
          local-dir: ./ # The project root, which includes the `_next` build output
          server-dir: /
          protocol: ftps
          timeout: 300000 # A longer timeout in milliseconds (300000 ms = 5 minutes)
```
Let's break down the steps.
## Checkout Code
This action checks out your repository under $GITHUB_WORKSPACE, so your workflow can access it.
## Set up Node.js
This step sets up the GitHub Actions workflow with a specific version of Node.js; in our case, we pin it to 14.21.2, which is the latest version that cPanel supports.
## Install dependencies and build Next.js app
In this step, we run commands like `yarn install`/`npm install` to install the dependencies required to run our app. It also includes reading env values from the repo secrets and writing them to `.env.local`.
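To make the `.env.local` step concrete, here is a tiny Node sketch of the file content those `echo` lines produce (the values below are placeholders; in CI they come from your repository secrets):

```javascript
// Build the .env.local payload the same way the workflow's echo lines do.
const secrets = { MONGO_URL: "mongodb://example", JWT_SECRET: "change-me" };

const envFile =
  Object.entries(secrets)
    .map(([key, value]) => `${key}=${value}`) // one KEY=value pair per line
    .join("\n") + "\n";                       // trailing newline, like echo adds

console.log(envFile);
// MONGO_URL=mongodb://example
// JWT_SECRET=change-me
```

Next.js then picks `.env.local` up automatically during `yarn build`, so no extra configuration is needed for the build step to see these values.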
## Deploy via FTPS
This step deploys the built code to the server using the FTP account.
| heyitsuzair | |
1,637,572 | La conferencia se ha celebrado en Anji, provincia china de Zhejiang | Yucun combina la ecoagricultura con el turismo, transforma minas abandonadas y cementeras en zonas... | 0 | 2023-10-17T14:48:27 | https://dev.to/jensenberry/la-conferencia-se-ha-celebrado-en-anji-provincia-china-de-zhejiang-10lo | Yucun combina la ecoagricultura con el turismo, transforma minas abandonadas y [cementeras](https://detailfinancial.com) en zonas turísticas, superpone el bambú, el té blanco y el turismo para crear productos agrícolas característicos, y convierte las granjas en granjas y casas de familia características. Como primer pueblo seleccionado como “Mejor Pueblo Turístico” por la Organización Mundial del [Turismo](https://dailyenquire.com) de las Naciones Unidas, recibió 700.000 turistas en 2022. La prioridad ecológica y el desarrollo verde están transformando la producción y la vida de las personas. En julio de 2022, “[Yucun](https://digitalfoundational.com) Global Partner Program” se puso en marcha, invitando a talentos globales para construir conjuntamente [Yucun](https://digitalreporterusa.com) y ampliar el desarrollo verde. | jensenberry | |
1,637,590 | Azure Storage services - Learning Day 1 | Hi, here I am sharing my learning for my better practices and storing here so that I can read it... | 0 | 2023-10-17T16:20:34 | https://dev.to/rashmiranjan28/azure-storage-services-learning-day-1-4f12 | azure | Hi, here I am sharing my learning for my better practices and storing here so that I can read it whenever I need.
If it helps you learn or recall the fundamentals, I'll be happy.
## Cloud
> The practice of using a network of remote servers hosted on the internet to store, manage and process data, rather than a local server or a personal computer.
> In simple terms: hosting something on the internet, where you are not managing it on your local system.
There are mainly 3 types of service provided by the cloud.
**1. IaaS (Infrastructure as a service)**
IaaS provides the foundational infrastructure elements needed for running applications, such as virtual machines, storage, and networking.
Users can access and manage these resources remotely, eliminating the need for physical servers and data centers.
ex: Azure, GCP, AWS.
**2. PaaS (Platform as a service)**
PaaS offers a platform that includes not only infrastructure but also tools and services for developing, testing, and deploying applications.
It abstracts much of the underlying infrastructure, allowing developers to focus on writing code rather than managing servers.
ex: Microsoft Azure App Service, Azure Data Factory, Azure Data Lake, etc.
**3. SaaS (Software as a service)**
SaaS delivers software applications over the internet on a subscription basis.
Users access the software through a web browser, and it is typically hosted and maintained by a third-party provider.
This model is user-centric, and users do not need to manage the underlying infrastructure or worry about updates and maintenance.
ex: Microsoft 365.
To store data in Azure, there are a lot of services available; anyone can see them in the documentation.
But mostly we use:
> 1. Azure storage account
> 2. Azure SQL
> 3. Azure Data Lake
> 4. Azure cosmos DB
**Why do we need this?**
To answer this question, let's go back to the '90s,
when everything was stored in relational databases and the size of the data was small.
At that time, we used RDBMS systems for storing and processing data.
But when we hit the 2000s, a lot of new technologies came into the world.
The world explored many different technologies and slowly started generating data in huge amounts.
**So, this is where big data arises.**
Then the **3 V's** come into the picture, which describe the key characteristics or challenges associated with big data.
1. Velocity --> 1sec, 1hr
2. Variety --> structured, semi structured, unstructured
3. Volume --> 5GB, 10Gb, 30Gb, 1TB
None of these individually can be called Big Data.
By combining these 3:
> If 5 GB, 10 GB, or any large volume of different varieties of data is generated every second, hour, or day, then we can call this BIG DATA.
**What is data classification (remember this vocabulary)?**
**1. Structured data**
This data has some schema, like rows and columns.
Mostly in tabular format.
ex: SQL, CSV, spreadsheets
**2. Semi-structured data**
NoSQL, key-value pairs, JSON.
**3. Unstructured data**
media files, office files, text files, log files.
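A tiny JavaScript sketch of the same record in each classification (the record itself is made up for illustration):

```javascript
// Structured: fixed schema, tabular — one CSV row under a header.
const structured = "id,name,city\n1,Asha,Pune";

// Semi-structured: self-describing key-value pairs (JSON).
const semiStructured = JSON.stringify({ id: 1, name: "Asha", city: "Pune" });

// Unstructured: free text with no schema at all.
const unstructured = "Asha from Pune logged in at 09:14 and uploaded a photo.";

console.log(JSON.parse(semiStructured).name); // "Asha"
```

The practical difference is how much work the storage/processing layer has to do: the structured row can go straight into a SQL table, the JSON carries its own field names, and the free text needs parsing or search before it is useful.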
The next article will be about Azure storage services.
| rashmiranjan28 |
1,637,944 | Reverting Slack's New UI: A Guide for Developers Who Love Control | Hello fellow developers! Being a software engineer for the past 10 years, I've had my fair share of... | 0 | 2023-10-17T22:02:38 | https://dev.to/makepad/reverting-slacks-new-ui-a-guide-for-developers-who-love-control-2dii | tips, programming, tutorial, productivity |
Hello fellow developers!
Having been a software engineer for the past 10 years, I've had my fair share of software updates, upgrades, and UI revamps. Some of them are warmly welcomed, while others make us question, "Why? Just... why?"
Recently, Slack decided to roll out a new UI, and let's just say it wasn't my cup of coffee. The change can be a bit frustrating for many of us juggling multiple Slack servers (and I'm talking about working with two different companies simultaneously). But fret not, for the developer community is resourceful and never backs down from a challenge.
Thanks to the good folks over at Reddit, I stumbled upon a nifty little trick that helped me revert to the old UI, and I wanted to share this knowledge with all of you, especially if you're in the same boat as me.
## Let's dive in!
### Step 1: Enable Slack's Developer Menu
Before we begin, ensure that Slack is closed. To enable the developer menu, we need to set an environment variable. Open your terminal and type:
```bash
export SLACK_DEVELOPER_MENU=true
```
### Step 2: Launch Slack
Once you've set the environment variable, navigate to your applications and open Slack:
```bash
open /Applications/Slack.app
```
### Step 3: Access Developer Console
Here comes the fun part! Once Slack is up and running, you need to open the developer console. For this, simply press:
```
cmd + option + I
```
### Step 4: The Magic Spell (JavaScript)
With the developer console in front of you, copy and paste the following JavaScript code:
```javascript
localStorage.setItem(
"localConfig_v2",
localStorage
.getItem("localConfig_v2")
.replace(
/\"is_unified_user_client_enabled\":true/g,
'"is_unified_user_client_enabled": false',
),
);
```
Hit 'Enter', and voila! The old, familiar Slack UI should be back.
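If you're curious what that snippet actually does before running it against your real config, here's a standalone sketch of the same regex-based rewrite, using a made-up `localConfig_v2` value (the real one holds much more data):

```javascript
// Made-up sample value — the real localConfig_v2 contains much more data.
const stored = '{"is_unified_user_client_enabled":true,"theme":"dark"}';

// Same rewrite the console snippet performs.
const updated = stored.replace(
  /\"is_unified_user_client_enabled\":true/g,
  '"is_unified_user_client_enabled": false',
);

console.log(JSON.parse(updated).is_unified_user_client_enabled); // false
```

The replacement string is still valid JSON (the extra space after the colon is harmless), which is why Slack can parse the stored value afterwards.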
## Wrapping up
The beauty of the developer community is how quick we are at finding workarounds when things don't go our way. And while I'm sure Slack had its reasons for the UI change, it's great to know that we aren't entirely helpless in such situations.
So, next time you're met with an unappreciated software change, remember this - there's probably a workaround out there. All you need is a little digging and some faith in the community.
Happy coding!
---
By the way, if you found this useful, drop a comment, share it with your peers, and let's continue making our software lives a bit more comfortable.
Cheers,
Kaan
---
**Note**: This method is based on community findings and might not work with future Slack updates. Always back up your important data before making any changes.
| kaanyagci |
1,638,027 | Hacktoberfest 2023 Pledge | This will be my sixth Hacktoberfest! | 0 | 2023-10-18T00:45:52 | https://dev.to/kayh/hacktoberfest-2023-pledge-2b3k | hacktoberfest23 | This will be my sixth Hacktoberfest! | kayh |
1,638,173 | How WhatsApp tests software? | I was reflecting on how the WhatsApp team (WA) tests its apps and what other teams across the globe... | 0 | 2024-05-25T05:52:48 | https://automationhacks.io/2023-10-18-how-whatsapp-tests-software | engineeringpractices, testautomation, metaengineering, softwareengineering | ---
title: How WhatsApp tests software?
published: true
date: 2023-10-18 00:00:00 UTC
tags: Engineeringpractices,Testautomation,MetaEngineering,SoftwareEngineering
canonical_url: https://automationhacks.io/2023-10-18-how-whatsapp-tests-software
---

I was reflecting on how the WhatsApp team (WA) tests its apps and what other teams across the globe can learn from their test engineering practices. This is also a recurring question that I get from many people, So let’s shed some light on this, shall we?
## Testing approach
I was a member of the WhatsApp Test Automation team in their London office for one and a half years, from the end of 2021 until close to mid-2023, and I had the privilege of taking an up-close and personal look at how an app with 2B+ users is tested.
The WhatsApp Test Automation team was responsible for developing test infrastructure, tools, and frameworks to allow developers to write efficient tests for WhatsApp clients (notably Android, iOS, web, and desktop apps). It was a critical part of the testing strategy for the entire organization.
In some ways, the approaches are similar to any other consumer-facing mobile app, with all the usual layers of the test pyramid. WhatsApp teams prefer to use many open-source tools to write their automated tests, but also use many internal Meta tools and infrastructure to supercharge their entire testing strategy. The testing approach for WA differs quite significantly from that of the other Meta apps as well.
In this blog, I’ll talk about two key aspects that enabled this strong culture of automation:
- Frameworks and Infrastructure
- Teams and personas that enable this
Let’s dive in 🏊
## Test Frameworks
Let’s first look at the different layers of the test pyramid and see what are the different tools and frameworks being used.
For mobile, automated tests were written at all layers of the test pyramid, with a high number of Espresso and XCUITest tests and a comparatively lower number of E2E tests.
### Android
- Unit tests: [JUnit](https://junit.org/junit5/), [Roboelectric](https://robolectric.org/)
- UI tests: [Espresso](https://developer.android.com/training/testing/espresso)
- Screenshot tests: [Screenshot tests for Android](https://github.com/facebook/screenshot-tests-for-android) ([read more about this here 🔗](https://facebook.github.io/screenshot-tests-for-android/))
- E2E tests: [Jest-based](https://jestjs.io/) internal E2E testing framework
### iOS
- Unit tests: [XCTest](https://developer.apple.com/documentation/xctest)
- UI tests: XCUI Tests
- Snapshot tests: Probably similar to [GitHub - uber/ios-snapshot-test-case: Snapshot view unit tests for iOS](https://github.com/uber/ios-snapshot-test-case/)
- E2E tests: [Jest-based](https://jestjs.io/) internal E2E testing framework
## Test Infrastructure
WhatsApp relied on a lot of Internal meta infrastructure for its device testing and CI needs.
### Code coverage
- Internal infra to collect unit and UI tests code coverage
- Internal dashboards to visually trend and metrics around code coverage
### Version control
- [Git](https://git-scm.com/)
- [Mercurial](https://www.mercurial-scm.org/)
### Test runner
- Internal test runner
### Devices and emulator
- Internal system
### Build
- Android: [Gradle](https://gradle.org/), iOS: [Xcode](https://developer.apple.com/documentation/xcode/build-system)
- [Buck2](https://github.com/facebook/buck2)
### Test Reporting
- Internal logging infrastructure
- Test monitoring in [scuba](https://research.facebook.com/publications/scuba-diving-into-data-at-facebook/)
- Internal reporting infrastructure
### CI
- Internal CI (read a [blog about it here](https://engineering.fb.com/2017/08/31/web/rapid-release-at-massive-scale/))
### Testing services
- [Test selection](https://engineering.fb.com/2018/11/21/developer-tools/predictive-test-selection/)
- Test recommendation
- Monkey testing with [Sapienz](https://engineering.fb.com/2018/05/02/developer-tools/sapienz-intelligent-automated-software-testing-at-scale/)
- Static analysis with [Infer](https://github.com/facebook/infer)
## A buffet of tools/frameworks 🧑🍳
It’s quite fascinating to see such a rich set of tools and frameworks, all supported by scalable infrastructure. Engineers thus spend less time maintaining or fighting the infra and instead focus on writing their tests, knowing full well that they will run and scale as long as they follow recommended practices.
You may have also noticed that most of these tools are used in many companies and startups and there are open-source solutions available to address these problems as well.
It’s just that at Meta the internal tools are heavily optimized through significant engineering investment over the years, and they integrate quite well with each other to create an ecosystem. There’s also a slight learning curve to understanding all of this, and the Test Automation Infra team had the privilege of being right in the middle of it all.
## So, How does WhatsApp “really” test its software?

At this point, you might be thinking, hmm, so WhatsApp uses a bunch of tools to enable automated testing, and may be wondering how these different teams and engineers are organized.
Let’s look at the testing setups and the major personas involved.
### Engineers own writing automated tests
At WhatsApp, developers write the bulk of the automated tests themselves.
They write app code, along with unit and UI tests and occasionally some E2E tests as well. I had previously written a short note on this cultural practice, feel free to read about it [here](https://automationhacks.io/2023-07-09-meta-engineering-practices-engineers-write-tests).
Here, I’m primarily referring to mobile engineers since WhatsApp being a chat app is very client-heavy. There are server teams that follow similar philosophies and write their tests themselves.
In some respects, this is quite nice: since engineers are responsible for their tests and also for any SEVs (production incidents), they are naturally motivated to write appropriate levels of automated tests to cover their code and mitigate any risks.
I did not see any dedicated automation team of SDETs/SETs (Software Engineers in Test) staffed within feature teams to take up writing automated tests, which is otherwise quite common in startups and many other companies.
Some natural questions arise on this practice:
- What are the 1st and 2nd order impacts of such an approach?
- And what about tooling and infra? Someone needs to maintain all that, right?
- Are tests written by engineers of higher quality compared to being written by dedicated testing specialists?
Well, You are right on the money to question this.
#### Pros 👍
- One of the benefits of this is that in general a lot of automated tests get written.
- If they are of bad quality (i.e., flaky and unreliable), the infra automatically removes them from running executions and notifies the author via a task system.
- Another benefit is that there is no bottleneck on a dedicated QA team to write all the automated tests
- Engineers don’t worry about building tools and infra and just make use of existing stuff to author tests
- Flaky tests get fixed faster (since more engineers are looking at that)
- Engineers understand testing practices better and in this case “Quality is everybody’s responsibility” translates to something meaningful rather than posters on a wall
#### Cons 👎
- Tests sometimes do not have robust assertions
- Tests have a lot of duplicated code, without much care for building the right abstractions. Since engineers view writing a test as just another task on their checklist to ship a feature, they do not pay much attention to building good, reusable abstractions
- A lot of flaky and broken tests are written
### Infra teams build tools and frameworks for testing
At WhatsApp, multiple Infra teams are responsible for just that.
Their charter involves thinking about testing strategy holistically, both in-depth and in breadth. These teams do not write tests. But they help build sufficient levels of test infra and tools to support developers writing their tests.
They also worry and care about the quality of the app deeply and build solutions to make life easier for developers. They are seen as Test Automation champions and experts whom dev teams rely upon for guidance and for building any specialized solutions for their specific needs. They also build educational materials like internal docs and courses on Test automation.
The need to have this dedicated group of individuals is super clear from my viewpoint.
Why?
Glad you asked. 😉
A developer tasked with feature work and automated testing will barely have time to build systematic solutions to testing problems that scale well. An Infra dev does not worry about writing tests or day-to-day feature delivery and can solely focus on building tools (regardless of the testing fidelity it targets). Since they build tools for not just one team, it also leads to better and more generic solutions that can be widely applicable.
The secret sauce is a leadership team that supports this and sees its value, securing buy-in and funding. Without that, such a team cannot prosper, and at WhatsApp, we were fortunate to have very strong leaders behind this team.
### Prod Ops team support infra teams
Now while Infra devs are solid software engineers, some having prior backgrounds in being full-time mobile and backend software developers, they very occasionally come from pure testing backgrounds (QA or SDET).
They know how to build infra and tooling solutions but don’t have deep connections with feature teams to understand all the nitty gritty of SDLC or spend a lot of time playing with different test frameworks and approaches.
The Prod Ops team filled this particular gap. It was staffed with engineers called **TQS (Technical Quality Specialists)**.
This team is in some ways similar to a traditional QA team responsible for leading the quality charter across the product. There are a few Android/iOS developers in this group as well.
They are responsible for a bunch of important activities.
- They are experts in exploratory testing and understand the product and flow deeply. They would focus on the big bets for the product and ensure all the right things happen in terms of testing and quality, think of features like channels, avatars, etc.
- They support Infra and feature teams by:
- Doing Test design and suggesting test cases, prioritizing them in terms of value
- Ensuring healthy test suites by removing redundant cases
- Supporting production testing and experimentation
- Triaging Customer issues and bringing feedback to product and leading bug triages
- Building insightful dashboards into the state of quality
- They sometimes author automated E2E tests as well and help with test flakiness analysis
They are a valued and core team with complementary skills that enable WhatsApp’s culture of quality and in my opinion, an essential glue that ties the whole thing together and enables this setup to work well. This team also has strong leadership support for its charter.
### Embedded Test leads within Feature teams
In addition to these, WhatsApp designated an engineer from each feature team as a Test lead role who was responsible for reviewing the test plan for the current quarter, coming up with areas that lacked coverage, and rallying their developers to write tests for those.
They would also be in sync with the Test automation team to develop know-how of current tools and features under development and help drive awareness to them within their own teams.
There was a regular meeting setup where the Test lead would present their plans, gather feedback, and then go back and help drive the execution. The whole project was managed by a Leadership SWE within WhatsApp.
In my opinion, generating such thought leadership within Teams was a catalyst in ensuring WhatsApp’s commitment to quality was upheld and very stable releases went out.
## Conclusion
To summarize:
- WhatsApp makes use of automated tests to drive quality and test coverage
- WhatsApp engineers write their own tests
- The test automation team helps build Infra and tools to support engineers
- Prod Ops QA supports Infra and feature teams by leading the quality charter
- Dedicated thought leadership around Test engineering by developers helps drive this
- Strong leadership support towards these teams, and staffing them with talented and senior engineers helps in moving the needle in the right direction
Did you notice how all these teams and functions work towards the common goal of shipping high-quality code faster? Test automation and Quality are gigantic spaces and require a lot of nuance and skills to ensure the right strategic bets make their way into the development workflows.
In summary, it does not take a few individuals but a whole village to care about Quality and investment from leadership in enabling such setups.
Hope this was helpful. Let me know in the comments if you have thoughts or any questions about this.
| Thanks for the time you spent reading this 🙌. If you found this post helpful, please share it with your friends and follow me ( **@automationhacks** ) for more such insights in Software **Testing** and **Automation**. Until next time, Happy Testing 🕵🏻 and Learning! 🌱 | [Newsletter](https://newsletter.automationhacks.io/) | [YouTube](https://www.youtube.com/@automationhacks) | [Blog](https://automationhacks.io/) | [LinkedIn](https://www.linkedin.com/in/automationhacks/) | [Twitter](https://twitter.com/automationhacks). | | automationhacks |
1,638,275 | The Three Waves of XDR – Open XDR delivers and extends the value of existing investments | We asked CIOs and CISOs what keeps them up at night, and the two main concerns are reducing... | 0 | 2023-10-18T07:21:33 | https://dev.to/stellarcyber/the-three-waves-of-xdr-open-xdr-delivers-and-extends-the-value-of-existing-investments-37ne |

We asked CIOs and CISOs what keeps them up at night, and the two main concerns are reducing security risks and improving analyst confidence and productivity. CxOs must report to corporate boards, and members of those boards are getting smarter about asking probing questions about the company’s security posture. CxOs need answers to those questions, and [XDR solutions](https://stellarcyber.ai/platform/what-is-open-xdr/) can help a lot, but there’s more to be done.
[XDR adoption](https://stellarcyber.ai/platform/what-is-open-xdr/) comes in waves as CxOs respond to pressing needs in the [SecOps center](https://stellarcyber.ai/enterprise/why-stellar-cyber-enterprise-stand-up-secops/). In the first wave, XDR was all about gaining visibility across the whole attack surface, providing out-of-the-box detections, and correlating alerts automatically to reduce the burden on analysts. By leveraging AI and ML to group together alerts from multiple tools, XDR helps eliminate attack detection delays, because analysts can see related alerts on one console instead of having to track multiple consoles for multiple tools. In fact, our [Open XDR platform](https://stellarcyber.ai/platform/what-is-open-xdr/) makes it especially easy to focus on the important alerts because it automatically groups them into actionable, contextual incidents.
The second wave of XDR was about automating responses. Now with [AI / ML](https://stellarcyber.ai/platform-ai-engine/) building a baseline of known threat and contextual situations, [XDR systems](https://stellarcyber.ai/stellar-cybers-open-xdr-step-into-security/) can automatically take protective actions such as shutting down firewall ports by communicating directly to [cybersecurity systems](https://stellarcyber.ai/platform/capabilities-ndr/). This not only further improves analyst productivity, but it also reduces risk by stopping understood and characterized intrusions more quickly than humans can act. This is the natural next step in [XDR adoption](https://stellarcyber.ai/xdr-will-converge-from-different-directions-xdr-open-xdr-native-xdr-hybrid-xdr-xdr/) and is helping to improve the benefits of productivity and confidence. In our [Open XDR platform](https://stellarcyber.ai/managed-security-providers-driving-profitable-mdr-services-with-stellar-cyber-open-xdr-platform/), there are pre-defined playbooks for taking automatic protective actions, and our customers can easily write new ones of their own.

The third wave of XDR is all about reducing future risks. Picture the CxO in a boardroom fielding such questions as, “What are we doing about ransomware?” or “Where are the key risks today and what are we doing about them?” CxOs need predictive analysis to identify the weak spots in their [security infrastructure](https://stellarcyber.ai/platform/platform-threat-intelligence/).
Security teams can do root cause analysis today, but that’s a reactive response to attacks that have already occurred. People need continuous evaluation of their security postures to proactively reduce their risks, especially when it comes to certain high-impact attacks, such as ransomware. Do we have sufficient information collected to detect the attack? Are there any vulnerabilities that may be exploited? Are there any misconfigurations that make it easier for the attackers to get in?
At Stellar Cyber, we work every day to expand the power of our Open XDR platform, to ensure your investments are leveraged and extended, and to prepare for the third [wave of XDR](https://stellarcyber.ai/platform/what-is-open-xdr/) as you read this. Stay tuned for more developments. | stellarcyber | |
1,638,306 | List of Best websites to get Python homework help | Python, with its versatile applications and user-friendly syntax, has become a staple in the world of... | 0 | 2023-10-18T08:07:47 | https://dev.to/shimlawalarahul/list-of-best-websites-to-get-python-homework-help-43l8 | Python, with its versatile applications and user-friendly syntax, has become a staple in the world of programming. However, like any language, it can pose challenges, especially for those just starting their coding journey. Whether you're a student grappling with a complex assignment or a professional seeking clarity on a specific Python concept, there's a wealth of online resources available. In this guide, I'll be sharing a curated list of the best websites to turn to for Python homework help.
Navigating the vast ocean of online Python resources can be daunting. Hence, I've sifted through countless platforms to bring you the crème de la crème of Python help websites:
- **Stack Overflow**: A programmer's haven, Stack Overflow offers a community-driven platform where you can pose questions and get answers from experienced developers. With a robust Python tag, you're likely to find solutions to even the most niche problems.
- **GeeksforGeeks**: This comprehensive platform not only provides tutorials but also has a dedicated section for Python programming. Their articles often break down complex concepts into digestible chunks, making it easier to grasp challenging topics.
- **Codingparks.com**: Codingparks is a homework and assignment help website. It provides tutorials on many programming languages, as well as training. You can hire codingparks.com for Python homework help and to get any [help with Python Homework](https://codingparks.com/python-homework-help/).
- **Reddit (r/learnpython)**: The Python community on Reddit is both active and supportive. Whether you have a specific homework question or need guidance on where to start, the members of r/learnpython are always ready to assist.
- **Python.org**: The official Python website has an extensive documentation section. While it might seem a bit technical for beginners, it's an invaluable resource once you get the hang of things.
- **Codementor**: If you're looking for one-on-one help, Codementor connects you with experienced Python developers for personalized assistance. It's particularly useful for in-depth guidance or when you're stuck on a tricky problem.
- **Tutor.com**: Catering specifically to students, Tutor.com offers live homework help, and their Python programming section is manned by experts ready to assist with assignments and projects.
If you need Python Homework Help and Python Assignment help then the best and most trusted website is: [Codingparks - Python Homework Help](https://codingparks.com/python-homework-help/)
Thanks! | shimlawalarahul | |
1,638,411 | ZenGPT: a simple ChapGPT alternative frontend | Originally posted on cri.dev I've been playing around with a home-made, super simple ChatGPT UI... | 0 | 2023-10-18T09:21:47 | https://cri.dev/posts/2023-10-17-zengpt-chapgpt-alternative-frontend-opensource-self-hosting/ | chatgpt, javascript, htmx, alpine | ---
title: "ZenGPT: a simple ChapGPT alternative frontend"
cover_image: https://cri.dev/assets/images/posts/zengpt.jpeg
tags:
- chatgpt
- javascript
- htmx
- alpine
published: true
canonical_url: https://cri.dev/posts/2023-10-17-zengpt-chapgpt-alternative-frontend-opensource-self-hosting/
---
**Originally posted on [cri.dev](https://cri.dev/posts/2023-10-17-zengpt-chapgpt-alternative-frontend-opensource-self-hosting/)**
I've been playing around with a home-made, super simple ChatGPT UI clone, mainly with the excuse to try out [htmx and Alpine.js](https://cri.dev/posts/2023-10-16-functions-as-views-javascript-node-javascript-template-htmx-alpine/)
It's a fun little project that I've been working on for a few days.
I've been programming it on an [iPad Pro (as my main device)](https://cri.dev/posts/2023-03-22-ipad-programming-github-codespaces-raspberry-pi-vscode/), and it's been a fun experience.
In this post I want to get deeper into the technical details of the project, and share some of the things I've learned while working on it.
---

## node.js server and bare functions as views
The other day I wrote about [functions as views](https://cri.dev/posts/2023-10-16-functions-as-views-javascript-node-javascript-template-htmx-alpine/), and how I've been using them in this project.
Namely, I'm using the native `http` node.js module, a simple home-made router with if statements, and a few functions that return HTML strings.
The functions are called with optional additional data and they return a string that is sent back to the client.
E.g.
```javascript
if (req.url === '/') {
res.setHeader('Content-Type', 'text/html')
return res.end(mainView(messages, listing()))
}
```
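For context, a "view" here is nothing more than a plain function that returns an HTML string. A minimal sketch (the names `mainView` and `renderMessages` mirror the ones used in the project, but this simplified body is illustrative, not the actual ZenGPT code):

```javascript
// Hypothetical view functions: plain functions returning HTML strings.
function renderMessages(messages) {
  return messages
    .map((m) => `<p class="${m.role}">${m.content}</p>`)
    .join("\n");
}

function mainView(messages) {
  return `<main><div id="messages">${renderMessages(messages)}</div></main>`;
}

const html = mainView([{ role: "user", content: "hi" }]);
console.log(html.includes('<p class="user">hi</p>')); // true
```

No template engine, no JSX compile step: just template literals and function composition.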
## Integrating with OpenAI's API
The [Completions API](https://platform.openai.com/docs/api-reference/completions) is pretty straightforward too:
You can use the `chat.completions.create` method to send the conversation to the API and get back a text completion (the LLM's response message).
```javascript
const completion = await ai.chat.completions.create({
model: 'gpt-3.5-turbo',
messages: messages
.concat([{ role: 'user', content: newUserMessage }]),
})
let llmText = completion.choices[0].message.content.trim()
```
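One detail worth noting: the chat completions endpoint is stateless, so the client has to resend the whole conversation on every call. A sketch of that bookkeeping around the call above (illustrative only, no network request here):

```javascript
// Each completed exchange is appended to the history, so the next
// chat.completions.create call sees the full conversation.
let messages = [{ role: "system", content: "You are a helpful assistant." }];

function recordExchange(userText, llmText) {
  messages = messages.concat([
    { role: "user", content: userText },
    { role: "assistant", content: llmText },
  ]);
}

recordExchange("Hello", "Hi! How can I help?");
console.log(messages.length); // 3
console.log(messages[2].role); // assistant
```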
## htmx
As mentioned before, the main reason I started this project was to try out [htmx](https://htmx.org/) (and Alpine.js)
The coolest thing I revisited during this excursus was rethinking what we consider RESTful APIs (spoiler: most are actually HTTP JSON RPC APIs), HATEOAS, hypertext, and honestly much more. The htmx website is a goldmine of resources.
In short: our websites and "RESTful" APIs should be way more discoverable (for humans, not machines) and self-contained.
By making use of existing powerful technology like HTML and HTTP, with sprinkles of JavaScript, to make the web more accessible, lightweight, more reliable and easier to use.
But let's get back to [htmx](https://htmx.org/).
> build modern user interfaces with the simplicity and power of hypertext
The emphasis here is leveraging the power of HTML.
E.g. in this small project, the client is loaded with a simple HTML page, preloaded with messages from the server.
The rest (sorry for the poor choice of words) is done by the client, that makes requests to load small snippets of HTML, and updates the DOM with the response.
This is an oversimplification, but it's the gist of it.
By using a declarative approach on the HTML, you can get a quite robust and powerful UI, with very little code.
The main focus of ZenGPT is the UI, this is the input and conversation part:
```html
<div id="messages" hx-swap="scroll:bottom">
${renderMessages(messages)}
</div>
<input
name="message"
hx-post="/chat"
hx-trigger="keyup[keyCode==13]"
hx-target="#messages"
hx-swap="beforeend scroll:bottom"
hx-indicator="#loading-message"
hx-on:htmx:before-request="this.disabled=true"
hx-on:htmx:after-request="this.disabled=false;setTimeout(() => this.focus(), 20)"
x-bind:disabled="messageDisabled"
x-ref="message"
x-model="message"
x-on:keyup.enter="setTimeout(() => {message = '';pristineChat = false}, 10)"
class="my-message" autofocus type="text" placeholder="your message">
<div style="position:fixed;bottom:3em;right:2em;" class="htmx-indicator" id="loading-message">
<svg xmlns="http://www.w3.org/2000/svg" version="1.1" width="64" height="64" viewBox="0 0 24 24">
<path fill="none" stroke="#000" stroke-width="2" stroke-linecap="round" d="M12 2 L12 6 M12 18 L12 22 M4.93 4.93 L7.76 7.76 M16.24 16.24 L19.07 19.07 M2 12 L6 12 M18 12 L22 12"></path>
</svg>
</div>
```
This is so-called "no-JavaScript", where the JS is simply hidden (remember you need to include a `<script src="https://unpkg.com/htmx.org"></script>`). I think it's a refreshing approach.
## Alpine.js
In ZenGPT, Alpine.js is used to manage the state of the UI, and to make client-side only UI changes and updates.
It is used to add interactivity to the UI.
E.g. it handles the display state of the action buttons in the header
```html
<div style="display:flex">
<div style="flex:1";><h1>zengpt</h1></div>
<div x-show="!pristineChat" style="flex:1;";>
<button style="display:block;padding:1rem;font-size:1.5rem;" hx-delete="/chat" hx-target="#messages" x-on:click="$refs.message.focus();messageDisabled=false;pristineChat=true">
new chat
</button>
</div>
<div x-show="!pristineChat" style="flex:1;";>
<button style="display:block;padding:1rem;font-size:1.5rem;" hx-post="/chats" hx-target="#messages" x-on:click="$refs.message.value = '';messageDisabled=false;pristineChat=true">
save chat
</button>
</div>
<div x-show="viewingPreviousChat" style="flex:1;";>
<button style="display:block;padding:1rem;font-size:1.5rem;" hx-get="/chat" hx-target="#messages" x-on:click="$refs.message.value = '';messageDisabled=false;">
go back
</button>
</div>
</div>
```
## open source
You can find the project on [github.com/christian-fei/zengpt](https://github.com/christian-fei/zengpt)
**Originally posted on [cri.dev](https://cri.dev/posts/2023-10-17-zengpt-chapgpt-alternative-frontend-opensource-self-hosting/)** | christianfei |
1,638,439 | Enter the World of Blockchain Gaming with Axie Infinity Clone Script | Certainly! Blockchain technology has changed many industries, including gaming. There's a popular... | 0 | 2023-10-18T10:01:49 | https://dev.to/josephinesaro/enter-the-world-of-blockchain-gaming-with-axie-infinity-clone-script-3gke | webdev, axie, axieinfinityclone | Blockchain technology has changed many industries, including gaming. There's a popular game called Axie Infinity that uses blockchain. This article introduces blockchain gaming and a tool called the Axie Infinity Clone Script, with which you can build your own version of the game.
**What is Blockchain Gaming?**
Blockchain gaming merges blockchain tech with video games for fun. It uses decentralized networks for transparency, security, and ownership of in-game items. Unlike regular games, where you don't have much control over your virtual stuff, blockchain gaming lets you truly own and trade your items.
**Axie Infinity: A Revolutionary Game**
Axie Infinity is a top example of blockchain gaming. It's a game with cute creatures called Axies. These Axies are digital pets that can be bred, battled, and traded in the game. The game uses special tokens called non-fungible tokens (NFTs) on the Ethereum blockchain to show each Axie and its special qualities.
**Benefits of Axie Infinity Clone Script**
As Axie Infinity becomes more popular, many entrepreneurs and developers want to make their blockchain games. That's where the Axie Infinity Clone Script comes in. It offers a pre-made solution to create a game like Axie Infinity. Here are some advantages of using the clone script.
**Cost-Effective:**
The process of creating a game from scratch takes a lot of time and money. The Axie Infinity Clone Script saves developers a lot of time and money because it provides a ready-made framework.
**Customizability:**
The clone script lets developers customize different parts of the game like design, rules, and features. This way, they can make the game fit their specific ideas and the people they want to play it.
**Quick Deployment:**
Using the clone script, developers can launch their blockchain game fast because it already has all the necessary features and mechanics from Axie Infinity.
**Seamless Integration:**
The clone script ensures seamless integration with blockchain networks, enabling the secure ownership and trading of in-game assets.
**Conclusion:**
Blockchain gaming is a thrilling trend that gives players real ownership of their virtual items. Axie Infinity is a big name in this trend. Developers can make their version of this exciting game using the Axie Infinity Clone Script. Blockchain technology is changing gaming, offering players special experiences and chances to start businesses. Step into the world of blockchain gaming with the Axie Infinity Clone Script and explore the potential of this groundbreaking technology.
Games similar to Axie Infinity are a good business opportunity for entrepreneurs, and many are entering the market with Axie Infinity clone scripts. Are you interested in starting your own Axie Infinity-like gaming platform? Now is the best time to build one. We at Fire Bee Techno Services are among the best **[Axie Infinity Clone Script](https://www.firebeetechnoservices.com/axie-infinity-clone-script)** providers, with a talented tech team and extensive blockchain game development experience. Change your lifestyle with us.
Whatsapp/Telegram: +91 73975 71188
Mail: business@firebeetechnoservices.com
| josephinesaro |
1,638,537 | Rails Core AMA - Rails World 2023: Hosted by...me! | As a contributing member of the Ruby on Rails Foundation, Planet Argon is delighted to collaborate... | 0 | 2023-10-18T11:43:45 | https://dev.to/planetargon/rails-core-ama-rails-world-2023-hosted-byme-355b | ruby, rails | As a [contributing member](https://blog.planetargon.com/blog/entries/news-planet-argon-joins-the-rails-foundation) of the [Ruby on Rails Foundation](https://rubyonrails.org/foundation), Planet Argon is delighted to collaborate with the Rails Foundation Core members and other contributing members to improve and maintain the future of the Rails community for all of us.
In a remote work world, getting together in person is challenging, so we can't pass it up when an opportunity arises!
Recently, I headed out to Amsterdam for [Rails World 2023](https://rubyonrails.org/world)!
Besides listening to insightful keynote talks and connecting with new and seasoned Rails practitioners, I was invited to host the Rails Core AMA Panel. Ten of the current 12 Rails Core members sat down to answer questions submitted by the Rails World community in the days leading up to the event.
In case you missed it, here’s a replay of the panel discussion.
{% embed https://www.youtube.com/watch?v=9GzYoUFIkwE %}
---
_originally posted on [blog.planetargon.com](https://blog.planetargon.com/blog/entries/rails-core-ama-rails-world-2023-hosted-by-robby-russell)._ | robbyrussell |
1,638,982 | Front-End: Which framework should you choose? Where to start? | Developing modern web applications requires choosing a frontend framework that meets the... | 0 | 2023-10-18T18:01:31 | https://dev.to/raynneandrade/front-end-qual-framework-escolher-por-onde-comecar-1e2p | webdev, javascript, react, vue |
Developing modern web applications requires choosing a frontend framework that meets the project's needs. Among the most popular options, React, Angular, and Vue.js stand out. In this article, we'll compare these frameworks to help you make a decision.
## Popularity and Community
React:
React is maintained by Facebook and is widely used at large companies. It has one of the largest developer communities and an abundance of third-party libraries and packages. This makes it easy to find support, resources, and solutions to common problems.
Angular:
Angular is developed by the Google team and is widely used in enterprise applications. Its community is robust and offers support for complex issues. However, the learning curve is steeper due to its complex architecture.
Vue.js:
Vue.js is maintained by Evan You and an active community of developers. Although smaller than React's and Angular's, its community has grown quickly. Vue.js is known for being beginner-friendly and for its gentle learning curve.
## Architecture
React:
React is a library for building user interfaces. It focuses on reusable components and efficient rendering. For state management, React integrates well with libraries like Redux and MobX.
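To make the state-management point concrete, here is a minimal Redux-style reducer sketched in plain JavaScript; the action names and state shape are illustrative assumptions, not from any particular app:

```javascript
// A Redux-style reducer: a pure function (state, action) -> new state.
// Redux itself just calls a function like this on every dispatched action.
function counterReducer(state, action) {
  if (state === undefined) {
    state = { count: 0 }; // initial state
  }
  switch (action.type) {
    case "increment":
      return { count: state.count + 1 };
    case "decrement":
      return { count: state.count - 1 };
    default:
      return state; // unknown actions leave state untouched
  }
}

// Actions are plain objects describing what happened
var state = counterReducer(undefined, { type: "increment" });
state = counterReducer(state, { type: "increment" });
console.log(state.count); // 2
```

Because the reducer is a pure function with no side effects, it is trivial to test in isolation, which is a large part of Redux's appeal.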
Angular:
Angular is a complete framework that follows the Model-View-Controller (MVC) architectural pattern. It provides a rigid structure for development, including modules, services, and dependency injection.
Vue.js:
Vue.js is a progressive framework that lets you start with just the essentials and add functionality as needed. It has built-in systems for components, routing, and state management.
## Flexibility
React:
React is highly flexible and integrates with a wide range of external libraries and tools. This gives developers the freedom to choose the solutions that best fit the project.
Angular:
Angular is less flexible because of its rigid architecture. This can be an advantage in large, complex projects, but may be overkill for smaller ones.
Vue.js:
Vue.js is known for its flexibility. It is an excellent choice for both small and large projects, letting developers pick what to use and when to use it.
## Ecosystem
React:
React has a vast ecosystem with many libraries and tools, including React Router for routing and Redux for state management.
Angular:
Angular's ecosystem is mature and includes resources such as the Angular CLI, Angular Universal for server-side rendering, and RxJS for managing data streams.
Vue.js:
Vue.js's ecosystem is growing and includes tools such as Vue Router, Vuex for state management, and Vue CLI for project scaffolding.
## Conclusion
The choice between React, Angular, and Vue.js depends on your project's specific needs, your development team, and the learning curve. React is ideal for flexibility and rendering efficiency, Angular suits complex enterprise applications, and Vue.js is a solid choice for projects of all sizes, especially for beginners.
Keep in mind that there is no single right answer when choosing a framework; everything depends on your project's context and your team's preferences. Assess your needs and weigh each framework's characteristics to make an informed decision.
| raynneandrade |
1,639,065 | How do I check if a bundle is installed in Netsuite? | Sometimes you want to check for the existence of another bundle prior to performing a feature or process. This shows you how. | 0 | 2023-10-18T19:28:23 | https://dev.to/smith288/how-do-i-check-if-a-bundle-is-installed-in-netsuite-1mc0 | ---
title: How do I check if a bundle is installed in Netsuite?
published: true
description: Sometimes you want to check for the existence of another bundle prior to performing a feature or process. This shows you how.
tags:
cover_image: https://media.tenor.com/Nfct9RreQfUAAAAd/dog-meme.gif
# Use a ratio of 100:42 for best results.
# published_at: 2023-10-18 19:17 +0000
---
_"I want to see if the customer has Ship Central installed prior to doing my cool feature. How do I do that?"_
In a word, simple. You search for a custom record type of a record that would exist in the bundle you want to check for.
In my example, I would like to see if the customer has the Netsuite Ship Central bundle installed. All you have to do is use the little-known search type 'customrecordtype' and filter against the column 'scriptid'.
Check out this function:
```javascript
// Note: 'search' here is the SuiteScript 2.x N/search module,
// e.g. loaded via define(['N/search'], function (search) { ... });
function doesRecordExist(scriptId) {
  var customRecordTypeSearch = search.create({
    type: "customrecordtype",
    filters: [["scriptid", "is", scriptId]],
    columns: ["scriptid"]
  });
  var searchResult = customRecordTypeSearch.run().getRange({
    start: 0,
    end: 1
  });
  // Declare the flag with var so it doesn't leak into the global scope
  var customExists = searchResult.length > 0;
  return customExists;
}
```
And you can run it like this:
```javascript
if (doesRecordExist('customrecord_packship_shipmanifest')) {
  // DO SOMETHING COOL
}
```
Give it a whirl. It's a great method to verify something exists prior to enabling a feature in your own bundle.
| smith288 | |
1,639,090 | Exploring Different Types of Software Development Roles and How to Get Ahead in Each | Discover the different types of software development roles and what aligns best with your skills, interests, and aspirations in this complete guide. | 0 | 2023-10-18T20:20:55 | https://code.pieces.app/blog/different-types-of-software-development-roles | <figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/types-of-software-dev_849a39b64c4006cf9c1f023281433ecd.jpg" alt="Exploring Different Types of Software Development Roles."/></figure>
Software development is a vast and exciting field, teeming with opportunities and various career paths. Whether you're new to coding or considering a career transition, you've likely realized that there's no one-size-fits-all roadmap. That's why you're here—to navigate the maze of career possibilities and discover what aligns best with your skills, interests, and aspirations.
This article explores the following types of software development careers:
- Full-stack software engineer
- Frontend/backend software engineer
- Cybersecurity professional
- Systems developer
- Researcher
- DevOps engineer
- Data engineer
- Test engineer
- Mobile engineer
- Developer advocate
For each of these software development roles, you'll learn about prerequisites, methods for skilling up, finding your first job, and climbing the career ladder. The article also explores how a tool called [Pieces for Developers](https://pieces.app/) can revolutionize your career development. Read on to unlock the keys to your future in software development.
## Full-Stack Software Engineer
A full-stack software engineer is skilled in both frontend and backend development. They are, essentially, a software development Swiss Army knife. They are capable of [building complete web applications](https://code.pieces.app/blog/building-a-flutter-web-app-from-scratch-a-complete-guide), from user interfaces to databases and server infrastructure.
### Prerequisites
- Comfort working with both the frontend (what users interact with) and the backend (server, database).
- Proficiency in languages like JavaScript (and frameworks like React or Angular), Python, or Ruby.
- Understanding of databases, both relational like MySQL and NoSQL like MongoDB.
- Familiarity with version control (e.g., Git) and basic design principles.
### How to skill up
- **Courses:** Online platforms like [Codecademy](https://www.codecademy.com/learn/paths/full-stack-engineer-career-path), [Udemy](https://www.udemy.com/topic/full-stack-web-development/), and [Coursera](https://www.coursera.org/courses?query=full%2520stack%2520web%2520development) offer comprehensive full-stack development courses.
- **Real-world experience:** Try engaging in personal projects or internships. This hands-on experience can often be more beneficial than just theoretical knowledge.
- **Open source contribution:** Platforms like [GitHub](https://github.com/) offer numerous open source projects. Contributing to these can help you understand real-world codebases and team collaboration.
### How to get your first job
- **Building a portfolio:** Create a personal website showcasing your full-stack projects. This will be your visual resume.
- **Networking:** Attend meetups, webinars, and workshops. Often, your first job opportunity comes through networking. You can find meetups specific to your location using [Meetup.com](https://meetup.com).
- **Internships and junior positions:** Startups and tech companies frequently offer roles in software development for budding full-stack developers.
### Career advancement pathway
- **Specialization:** With a broad base, you can later choose to specialize (in backend technologies, for example).
- **Team leadership:** With experience, guiding a software development team or even transitioning into a project management role becomes a possibility.
## Frontend/Backend Software Engineer
Frontend engineers focus on building intuitive user interfaces and crafting the best user experience, while backend engineers handle server-side logic, databases, and application infrastructure. Both these types of software developers are essential team members of any tech startup or large-scale organization.
### Prerequisites
- **Frontend:** Mastery over [HTML](https://code.pieces.app/blog/introduction-to-html), CSS, and [TypeScript or JavaScript](https://code.pieces.app/blog/typescript-vs-javascript-making-the-right-choice-for-your-project). Proficiency in frameworks like React, Vue, or Angular.
- **Backend:** Grasp of server-side languages like Java, Python, or Node.js. Familiarity with database management and server deployment.
### How to skill up
- **Courses:** Specialized courses on platforms like [freeCodeCamp](https://www.freecodecamp.org/) or [Pluralsight](https://www.pluralsight.com/) can be beneficial. Read this guide for [important tips for software engineering students](https://code.pieces.app/blog/tips-for-software-engineering-students), if you’re starting early.
- **Projects:** Front end development professionals can update the UI design of existing websites or software applications for practice, while backend developers can create Application Programming Interfaces (APIs) or work on high quality server deployments.
- **Community engagement:** Engage on forums like [Stack Overflow](https://stackoverflow.com/) or [Reddit](https://www.reddit.com/), which offer practical insights and solutions to real-world problems.
### How to get your first job
- **Portfolio and GitHub:** Similar to other roles in a software development team, having a strong portfolio and active [GitHub](https://github.com/) repository can set you apart.
- **Freelancing:** Websites like [Upwork](https://www.upwork.com/) or [Freelancer](https://www.freelancer.com/) offer short-term gigs that help you gain experience and build a portfolio.
### Career advancement pathway
- **Specialized training:** Continuous learning is vital for all software development types. As technology evolves, keeping up-to-date with [future AI tools](https://code.pieces.app/blog/future-ai-tools-going-from-unknown-to-unstoppable), the latest trends, and methodologies is crucial. For instance, as a frontend engineer, you could learn more about [responsive web design](https://code.pieces.app/blog/foundation-the-best-framework-for-building-responsive-sites), [web accessibility](https://code.pieces.app/blog/improve-website-accessibility), progressive web apps, and so on, while as a backend engineer, you could learn more about microservices architecture, serverless architecture, etc.
- **Consultancy:** After gaining ample experience, many developers opt to advise companies on their areas of expertise.
## Cybersecurity Professional
In an age where data breaches can make or break companies, cybersecurity professionals are crucial. Unlike other software developer roles and responsibilities, cybersecurity professionals are dedicated to protecting systems, networks, and data from cyberthreats, ensuring the integrity, confidentiality, and availability of information.
### Prerequisites
- Strong foundational knowledge of networks, systems, and computer architectures.
- Familiarity with common cybersecurity tools and practices, as well as languages like [Python or Golang](https://code.pieces.app/blog/python-vs-golang).
- Certifications. While not mandatory, they're often recommended; examples include [CompTIA Security+](https://www.comptia.org/certifications/security) or [Certified Ethical Hacker](https://www.eccouncil.org/train-certify/certified-ethical-hacker-ceh/) (CEH).
### How to skill up
- **Courses:** Platforms like [Cybrary](https://www.cybrary.it/) or [Infosec](https://www.infosecinstitute.com/) offer courses ranging from the basics to advanced areas in cybersecurity. Basic courses include Introduction to IT & Cybersecurity and Computer Network Basics, while advanced courses include Advanced Persistent Threats, Malware Analysis, and Reverse Engineering.
- **Hands-on practice:** Platforms like [Hack The Box](https://www.hackthebox.com/) or [TryHackMe](https://tryhackme.com/) offer real-world scenarios for different types of software development to test and improve your skills.
- **Higher education:** Consider pursuing a master's degree or even a PhD if academia or high-end corporate research interests you.
### How to get your first job
- **Internships:** Many tech companies offer internships specifically focused on cybersecurity.
- **Entry-level positions:** Roles like security analyst or IT security consultant can be a great starting point.
### Career advancement pathway
- **Specialization:** As the field is vast, consider specializing in areas like [network security](https://www.cisco.com/c/en/us/products/security/what-is-network-security.html), [cloud security](https://www.ibm.com/topics/cloud-security), or even [forensics](https://intellipaat.com/blog/what-is-cyber-forensics/).
- **Management:** With experience, transitioning into advanced software development roles like security manager or chief information security officer (CISO) can be the next step.
## Systems Developer
Systems developers design and maintain complex computer systems, involving system integration, configuration, and optimization. They often focus on system-level software, such as operating systems and embedded systems.
### Prerequisites
- Familiarity with low-level programming languages like C or C++.
- An understanding of computer architectures, hardware-software interactions, and real-time systems.
### How to skill up
- **Courses:** Platforms like [MIT OpenCourseWare](https://ocw.mit.edu/) offer free courses on operating systems and system programming.
- **Projects:** Work on open source system-level projects. Raspberry Pi projects such as a [Pi-based home server](https://levelup.gitconnected.com/setting-up-a-raspberry-pi-home-server-ec7e11ee64e0) or [DIY VPN server](https://restoreprivacy.com/vpn/raspberry-pi/) can be an excellent starting point.
- **Engage on forums:** Platforms like [OSDev.org](https://wiki.osdev.org/Main_Page) provide community-driven guidance for aspiring systems developers.
### How to get your first job
- **Networking:** Given that the systems developer's scope is more niche than other software development roles, networking at system development or embedded system conferences is invaluable.
- **Open source contribution:** Major operating systems, like Linux, are open source. [Contributing to open source projects](https://code.pieces.app/blog/a-beginners-guide-to-open-source-contribution-for-developers) can offer practical experience and visibility.
### Career advancement pathway
- **Systems architect:** Over time, once you have gained a significant amount of experience, you can move up to different roles in software development such as systems architect. In this role, you design complex system structures, oversee projects, and ensure the integration of multiple system functionalities.
- **Embedded systems specialist:** Embedded systems are everywhere, from appliances to medical devices. Specializing in this domain means working closely with hardware and writing efficient, compact, and robust software to run specific tasks.
## Researcher
Researchers are individuals who delve into specific tech domains, exploring new methodologies, tools, or solutions. They push the boundaries of what's possible in the tech world, often in academia or corporate research labs.
### Prerequisites
- Strong foundational knowledge in your chosen area of interest.
- A master's or PhD is often necessary in research roles.
### How to skill up
- **Academic courses:** Consider deep dives into specialized courses related to your research interest. For instance, if your research interest is in quantum computing, you can take a course in quantum computing for computer scientists.
- **Collaborate:** Partner with existing researchers, join research groups, or engage in open source research projects.
- **Publish:** Start writing papers, even if independently, and submit them to journals or conferences.
### How to get your first job
- **Post-doctoral roles:** After a PhD, many researchers opt for postdoctoral roles that further specialize their skills.
- **Corporate labs:** Companies like Google, Microsoft, and IBM have their own research labs that often hire fresh PhD graduates.
### Career advancement pathway
- **Lead researcher:** As you accumulate experience and recognition, leading your own research projects or teams is a natural progression for these types of software developer jobs.
- **Consultancy:** Offer your expertise to businesses, startups, or governments that can benefit from cutting-edge research insights.
## DevOps Engineer
DevOps engineers are experts who bridge the gap between software development and operations. They automate and streamline integration, deployment, and infrastructure management processes, optimizing the overall software development process.
### Prerequisites
- Familiarity with tools like Docker, Jenkins, and Kubernetes.
- Proficiency in scripting languages like Python or Bash.
- Knowledge of cloud platforms like [AWS](https://aws.amazon.com/), [GCP](https://cloud.google.com/?hl=en), or [Azure](https://azure.microsoft.com/en-us).
### How to skill up
- **Courses:** Platforms like [Udacity](https://www.udacity.com/) or [A Cloud Guru](https://www.pluralsight.com/cloud-guru) offer comprehensive DevOps training.
- **Real-world practice:** Set up continuous integration and continuous deployment (CI/CD) pipelines for personal projects.
- **Join DevOps communities:** Engage in communities like [DevOpsDays](https://devopsdays.org/) or local DevOps meetups.
### How to get your first job
- **Internships:** Many tech companies, especially startups, look for DevOps interns as these roles become more critical.
- **Entry-level roles:** Titles like junior DevOps engineer or DevOps analyst can be ideal starting points.
### Career advancement pathway
- **Specialization:** Focus on specific areas like cloud automation, microservices, or infrastructure as code.
- **Management:** Transition into roles overseeing DevOps teams or strategy as you gain experience.
## Data Engineer
Data engineers are specialists who design, construct, install, and maintain large-scale data processing systems, such as databases and infrastructures. Unlike other software development roles, they focus on ensuring data availability for data analysts and scientists.
### Prerequisites
- Proficiency in languages like Python, SQL, and Java.
- Experience with big data tools like Hadoop, Spark, and Kafka.
### How to skill up
- **Courses:** Platforms like [DataCamp](https://www.datacamp.com/) and [edX](https://www.edx.org/) offer courses tailored to big data and data engineering.
- **Projects:** Build and scale databases or work on data processing tasks using platforms like [Amazon Redshift](https://aws.amazon.com/redshift/) or [Google BigQuery](https://cloud.google.com/bigquery).
### How to get your first job
- **Networking:** Connect with professionals at data-centric conferences like the [Strata Data & AI Conference](https://www.oreilly.com/conferences/strata-data-ai.html) or online forums like [Stack Overflow](https://stackoverflow.com/) or the [Data Engineering subreddit](https://www.reddit.com/r/dataengineering/).
- **Entry-level positions:** Companies dealing with large data sets often hire data engineers to streamline their data infrastructure.
### Career advancement pathway
- **Specialization:** Areas like real-time data processing, data lakes, or [extract, transform, load](https://aws.amazon.com/what-is/etl/) (ETL) processes offer avenues for deeper expertise.
- **Data architecture:** Moving towards designing the broader data strategy for organizations can be a rewarding next step.
## Test Engineer
Test engineers, often referred to as quality assurance (QA) engineers, ensure software quality by designing and executing tests, identifying bugs, and verifying that the product meets requirements.
### Prerequisites
- Understanding of software development lifecycles.
- Familiarity with different types of testing methodologies and tools like [Selenium](https://www.selenium.dev/), [JUnit](https://junit.org/junit5/), or [TestNG](https://testng.org/doc/).
### How to skill up
- **Courses:** Platforms like [Udemy](https://www.udemy.com/) or [Coursera](https://www.coursera.org/) have courses dedicated to software testing techniques.
- **Hands-on experience:** Regularly practice testing on open source projects or personal coding projects.
- **Engage on QA forums:** Websites like [Stack Exchange](https://sqa.stackexchange.com/) or [Ministry of Testing](https://www.ministryoftesting.com/) can be excellent resources.
### How to get your first job
- **Internships:** Many organizations offer QA internships, recognizing the importance of robust testing.
- **Networking:** Connect with QA professionals or attend software testing seminars to obtain leads to entry-level roles.
### Career advancement pathway
- **Specialization:** Delving into niches like performance testing, security testing, or automation can enhance your expertise.
- **Management:** As experience is accumulated, transitioning to roles overseeing testing teams or strategies is a common trajectory.
## Mobile Engineer
Mobile engineers are developers who specialize in creating applications tailored for mobile platforms like Android and iOS, focusing on user experience, performance, and platform-specific features.
### Prerequisites
- Proficiency in Kotlin or Java for Android and Swift or Objective-C for iOS.
- An understanding of mobile UI/UX design principles.
### How to skill up
- **Courses:** Platforms like [Udacity](https://www.udacity.com/) or [Treehouse](https://teamtreehouse.com/) offer specialized courses for Android and iOS development.
- **App development:** Regularly building and publishing apps will enhance your skills and portfolio.
- **Engage in mobile developer communities:** Participating in Android or iOS developer communities can offer valuable insights and connections.
### How to get your first job
- **App portfolio:** Showcasing apps you've developed on platforms like GitHub can impress potential employers.
- **Networking:** Attend mobile developer conferences, workshops, or local meetups.
### Career advancement pathway
- **Specialize in cross-platform development:** Tools like [Flutter or React Native](https://code.pieces.app/blog/flutter-vs-react-native) allow developers to build apps for both Android and iOS simultaneously.
- **Lead developer or architect roles:** As you gain expertise, steering the direction of mobile app projects or leading mobile dev teams becomes viable.
## Developer Advocate
A [developer advocate](https://code.pieces.app/blog/how-to-become-a-better-developer-advocate) is a technical evangelist who bridges the gap between developers and stakeholders. They promote and support technical products through community engagement, content creation, and feedback facilitation.
### Prerequisites
- Strong technical background combined with excellent communication skills.
- An established online presence, e.g., via blogs, social media, or GitHub.
### How to skill up
- **Engage with developer communities:** Try regular participation in forums like [Stack Overflow](https://stackoverflow.com/) or open source projects like [freeCodeCamp](https://contribute.freecodecamp.org/#/index), or writing or reading from technical blogs like [DEV](https://dev.to/).
- **Soft skills:** Work on presentation, public speaking, and community engagement skills.
### How to get your first job
- **Online portfolio:** Showcase your blogs, videos, or contributions that demonstrate both technical depth and communication prowess.
- **Networking:** Engage in tech conferences, workshops, or webinars, either as an attendee or presenter.
### Career advancement pathway
- **Specialization:** Focus on becoming an advocate for specific technologies or platforms that are rising in popularity, like [cloud computing](https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-is-cloud-computing).
- **Management:** With enough experience, move into roles where you manage teams of developer advocates or craft broader developer relations strategies.
## How Pieces Can Help You in Your Career Development
Most software development roles are not just about coding. They're also about gathering resources, optimizing your workflow, and ensuring you have a repository of knowledge that can be easily accessed and shared. Especially for junior developers or students venturing into the field, having a tool that streamlines this process can be invaluable.
Anyone who's tried to manage their coding resources knows the chaos. Multiple [notes apps](https://code.pieces.app/blog/stop-filling-your-note-taking-app-with-code-snippets), a plethora of browser bookmarks, scattered files, and an ever-growing list of browser tabs. You're likely to lose the thread of your research, or worse, miss out on that one essential piece of code you saw earlier.
[Pieces](https://docs.pieces.app/installation-getting-started/what-am-i-installing) lets you instantly save snippets of code and, over time, build a repository that speeds up your workflow. Furthermore, Pieces helps you stay organized, letting you focus on coding, learning, and collaborating.
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/9ganzry_6bd0270f8f55df05bf6a86b38b8820f2.jpg" alt="Saving code snippets."/><figcaption>[Image courtesy of Pieces](https://pieces.app/)</figcaption></figure>
### Conduct Easier Research in Multiple Languages through In-App Translation
Differences in coding languages shouldn't be a barrier. Whether you're a frontend developer diving into backend or a systems developer venturing into web development, Pieces helps bridge the gap. The transformation feature can convert code snippets from one language to another, making it easier to understand and integrate into your projects.
### Create Boilerplate Templates for Code Reuse during Interviews
Interviews can be nerve-racking, and having to rewrite code templates you've written countless times before just adds to the stress.
Pieces allows you to create templates to save your binary search template, that often-used data structure, or even a tricky algorithm. When you're in the thick of [coding interview prep](https://code.pieces.app/blog/code-snippets-coding-interview-prep), these snippets can be a lifesaver, allowing you to focus on optimization and problem-solving.
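As an illustration of such a reusable template, here is a classic binary search in JavaScript, one common variant of the snippet the paragraph above suggests keeping on hand:

```javascript
// A reusable binary search template: returns the index of target in a
// sorted array, or -1 if it is absent.
function binarySearch(arr, target) {
  let lo = 0;
  let hi = arr.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) lo = mid + 1; // target is in the right half
    else hi = mid - 1; // target is in the left half
  }
  return -1;
}

console.log(binarySearch([1, 3, 5, 7, 9], 7)); // 3
console.log(binarySearch([1, 3, 5, 7, 9], 4)); // -1
```

Keeping a snippet like this saved means that in an interview you only need to adapt the comparison and boundary logic to the problem at hand rather than rederiving the loop invariants under pressure.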
### Enrich Code Snippets for Better Understanding
Pieces is not only a place to [store code snippets](https://code.pieces.app/blog/how-to-store-code-snippets-and-10x-your-developer-productivity), but also amplifies the usability and comprehension of saved content. As junior developers and budding coders navigate through myriad languages, libraries, and frameworks, Pieces ensures that every saved snippet translates to enriched understanding and swift implementation.
When you save a snippet with Pieces, its AI generates a precise title and description for you. Instead of you manually tagging, Pieces smartly assigns relevant tags, refining your search process and improving [code organization](https://code.pieces.app/blog/modern-code-organization-techniques). For every snippet, you can add related links, like the original source, an explanatory tutorial, or other resources, turning each saved code into a mini-lesson.
### Simplify Research with Browser Plugins
With Pieces browser extensions, any useful code you find online is just a click away from joining your personal library. Along with the code, the page URL and essential details are stored, ensuring you never lose the context of a saved snippet.
### Enhance Code Readability in IDEs
Code isn't just about function; it's about form, too. With Pieces' seamless integration with your code, you can make sure what you write is functional and aesthetically pleasing. For example, with the Pieces for Developers VS Code extension, you can reuse code from your personal Pieces repository with auto-completion, save snippets throughout your work-in-progress journey, and even ask the [Pieces Copilot](https://code.pieces.app/blog/introducing-pieces-copilot) about any repository to help you understand, compliment, and generate relevant code for your project.
### Extract Code from Multimedia Content
Videos and images have become indispensable teaching tools, but typing code from paused videos or screenshots can be incredibly tedious. With Pieces, you can just take a screenshot, and it will extract the text for you so you're no longer staring at monotonous lines. Thanks to syntax highlighting, every code element pops, simplifying both reading and debugging. Pieces also formats and extracts the code so it's ready for you to reuse.
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/ojyulbo_bc3f802fcefeccaaf2765848c4285601.gif" alt="Optical code recognition."/><figcaption>[Image courtesy of Pieces](https://docs.pieces.app/product-highlights-and-benefits/saving-screenshots)</figcaption></figure>
### Future-Proof with Upcoming Features
Pieces also has some exciting features on the way, including collaborating in real time, sharing folders of snippets, scrollable feeds of relevant snippets, and cross-device sync. Soon, you'll be able to cluster related snippets, simplifying your review, study, or sharing process.
## Conclusion
All types of software development roles come with a unique set of challenges, rewards, and growth opportunities. Whether you're aiming to dominate the frontend, delve deep into backend intricacies, or explore the ever-evolving worlds of cybersecurity, DevOps, or data engineering, understanding the road ahead is crucial. Equipped with a clear roadmap, you can skill up effectively, seize the right job opportunities, and ascend the career ladder with purpose and determination.
However, as every seasoned coder knows, success in the software development world isn't just about the journey—it's about the tools you take with you. Beyond its function as a snippet manager, Pieces can streamline and enrich your workflow while also simplifying research and generating contextual code for your project. [Try Pieces today](https://docs.pieces.app/installation-getting-started/what-am-i-installing) to see how it's the perfect companion as you begin or change course in your software development journey. | get_pieces | |
1,639,103 | Build a Text Summarization app using Reflex (Pure Python) | Reflex is an open-source, full-stack Python framework that makes it easy to build and deploy web apps... | 0 | 2023-10-19T19:00:16 | https://dev.to/emmakodes_/build-a-text-summarization-app-using-reflex-pure-python-1a94 | reflex, python, openai, machinelearning | [Reflex](https://reflex.dev/) is an open-source, full-stack Python framework that makes it easy to build and deploy web apps in minutes. You have most of the features of a frontend library like Reactjs and a backend framework like Django in one with ease in development and deployment. All while developing in a single language **PYTHON**.
We will use Reflex to build a text summarization app where a user can input text, and an OpenAI LLM together with LangChain will generate a summary of it.
The following will be the output of the app:

## Outline
- Get an OpenAI API Key
- Create a new folder, open it with a code editor
- Create a virtual environment and activate
- Install requirements
- Reflex setup
- text_summarizer.py
- state.py
- style.py
- .gitignore
- Run app
- Conclusion
## Get an OpenAI API Key
First, get your own OpenAI API key:
- Go to https://platform.openai.com/account/api-keys.
- Click on the + Create new secret key button.
- Enter an identifier name (optional) and click on the Create secret key button.
- Copy the API key to be used in this tutorial

## Create a new folder and open it with a code editor
Create a new folder and name it `text_summarizer` then open it with a code editor like VS Code.
## Create a virtual environment and activate it
Open the terminal. Use the following command to create a virtual environment `.venv` and activate it:
```
python3 -m venv .venv
```
```
source .venv/bin/activate
```
## Install requirements
We need to install `reflex` to build the app, along with `openai`, `tiktoken`, `chromadb`, and `langchain` to generate the text summaries.
Run the following command in the terminal:
```
pip install reflex==0.2.9 openai==0.28.1 tiktoken==0.5.1 chromadb langchain==0.0.316
```
## Reflex setup
Now, we need to create the project using Reflex. Run the following command to initialize the template app in the `text_summarizer` directory.
```
reflex init
```
The above command will create the following file structure in the `text_summarizer` directory:

You can run the app using the following command in your terminal to see a welcome page when you go to [http://localhost:3000/](http://localhost:3000/) in your browser
```
reflex run
```
## text_summarizer.py
We need to build the structure and interface of the app and add components. Go to the `text_summarizer` subdirectory and open the `text_summarizer.py` file. This is where we will add components to build the structure and interface of the app. Add the following code to it:
```python
import reflex as rx

# import State and style
from text_summarizer.state import State
from text_summarizer import style


def full_text() -> rx.Component:
    """return a vertical component of heading and text_area."""
    return rx.vstack(
        rx.heading("Text Summarizer", style=style.topic_style),
        rx.text_area(
            value=State.large_text,
            placeholder="Enter your full text here",
            on_change=State.set_large_text,
            style=style.textarea_style,
        ),
    )


def openai_key_input() -> rx.Component:
    """return a password component"""
    return rx.password(
        value=State.openai_api_key,
        placeholder="Enter your openai key",
        on_change=State.set_openai_api_key,
        style=style.openai_input_style,
    )


def submit_button() -> rx.Component:
    """return a button."""
    return rx.button(
        "Summarize text",
        on_click=State.start_process,
        is_loading=State.is_loading,
        loading_text=State.loading_text,
        spinner_placement="start",
        style=style.submit_button_style,
    )


def summary_output() -> rx.Component:
    """return summary."""
    return rx.box(
        rx.text(State.summary, text_align="center"),
        style=style.summary_style,
    )


def index() -> rx.Component:
    """return a full_text, openai_key_input, submit_button, summary_output respectively."""
    return rx.container(
        full_text(),
        openai_key_input(),
        submit_button(),
        summary_output(),
    )


# Add state and page to the app.
app = rx.App(style=style.style)
app.add_page(index)
app.compile()
```
The above code renders a heading, a text area input, a password input for your OpenAI key, a submit button, and a box to display the summary.
## state.py
Create a new file `state.py` in the `text_summarizer` subdirectory and add the following code:
```python
import reflex as rx
from langchain.chat_models import ChatOpenAI
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document


class State(rx.State):
    # The current large text to be summarized.
    large_text: str

    # openai key
    openai_api_key: str

    # the result
    summary: str

    is_loading: bool = False
    loading_text: str = ""

    def start_process(self):
        """Set state variables and call the summarize method."""
        self.is_loading = True
        self.loading_text = "generating summary...."
        return State.summarize

    def summarize(self):
        llm = ChatOpenAI(
            temperature=0,
            model_name="gpt-3.5-turbo-16k",
            streaming=True,
            openai_api_key=self.openai_api_key,
        )

        # Wrap the full text in a single Document. (Iterating over the string
        # itself would create one Document per character.)
        docs = [Document(page_content=self.large_text)]

        # use load_summarize_chain to summarize the full text and return self.summary to frontend
        chain = load_summarize_chain(llm, chain_type="stuff")
        self.summary = chain.run(docs)
        yield

        # reset state variables again
        self.is_loading = False
        self.loading_text = ""
```
The above code uses `load_summarize_chain` with a chain type of "stuff" to generate the summary and send it to the frontend. When the user clicks the "Summarize text" button, it calls the `start_process` method, which sets the button's `is_loading` argument to `True` so that the button spins with the text "generating summary...." while the app works. `start_process` then calls the `summarize` method, which generates the summary with the help of the OpenAI LLM "gpt-3.5-turbo-16k" and yields the result to the frontend. Finally, it sets the button's `is_loading` argument back to `False` and `loading_text` to an empty string to return the button to its initial state.
## style.py
Create a new file `style.py` in the `text_summarizer` subdirectory and add the following code. This will add styling to the page and components:
```python
style = {
    "background-color": "#454545",
    "font_family": "Comic Sans MS",
    "font_size": "16px",
}

topic_style = {
    "color": "white",
    "font_family": "Comic Sans MS",
    "font_size": "3em",
    "font_weight": "bold",
    "box_shadow": "rgba(240, 46, 170, 0.4) 5px 5px, rgba(240, 46, 170, 0.3) 10px 10px",
    "margin-bottom": "3rem",
}

textarea_style = {
    "color": "white",
    "width": "150%",
    "height": "20em",
}

openai_input_style = {
    "color": "white",
    "margin-top": "2rem",
    "margin-bottom": "1rem",
}

submit_button_style = {
    "margin-left": "30%",
}

summary_style = {
    "color": "white",
    "margin-top": "2rem",
}
```
## .gitignore
You can add the .venv directory to the .gitignore file to get the following:
```
*.db
*.py[cod]
.web
__pycache__/
.venv/
```
## Run app
Run the following in the terminal to start the app:
```
reflex run
```
You should see an interface as follows when you go to http://localhost:3000/

You can input the text that you want summarized, enter your OpenAI API key, and then click the button to get your summarized text.
## Conclusion
Reflex is awesome and a game-changer. You should try it out. You can get the code: [https://github.com/emmakodes/text_summarizer.git](https://github.com/emmakodes/text_summarizer.git)
| emmakodes_ |
1,639,250 | JavaScript: Promise | How to use JavaScript Promises in the industry :- Ecommerce: Promises are a way to... | 0 | 2023-10-19T01:40:03 | https://dev.to/lakharashubham007/javascript-promise-4dj5 | javascript, interview | ## How to use JavaScript Promises in the industry :-
**Ecommerce:**
> Promises are a way to handle asynchronous operations more cleanly and manage errors effectively.
- E-commerce platforms often rely on external APIs for various functionalities like product catalog, payment processing, or shipping calculations. When making API requests, use the fetch API or a library like Axios that returns promises.
```javascript
fetch('https://api.example.com/products')
.then(response => response.json())
.then(data => {
// Handle the API response data
})
.catch(error => {
// Handle errors
});
```
- Promises are a fundamental concept in asynchronous programming in JavaScript.
- They provide a way to handle asynchronous operations more cleanly and manage the flow of asynchronous code.
The core concepts of promises include:
1. **States**:
- Promises have three possible states:
- **Pending**: The initial state when a promise is created, representing that the operation is ongoing or hasn't completed yet.
- **Fulfilled**: The state when the asynchronous operation is successfully completed, and a result value is available.
- **Rejected**: The state when the asynchronous operation encounters an error or fails, and an error reason is provided.
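As a tiny illustration of the three states above (a hypothetical sketch, not tied to any particular library), a promise can be constructed directly in each state:

```javascript
// A promise whose executor never settles it stays "pending" forever.
const pending = new Promise(() => {});

// Promise.resolve() produces an already "fulfilled" promise with a value.
const fulfilled = Promise.resolve('done');

// Promise.reject() produces an already "rejected" promise with a reason.
const rejected = Promise.reject(new Error('oops'));

fulfilled.then(value => console.log(value));         // done
rejected.catch(error => console.log(error.message)); // oops
```

Once a promise leaves the pending state, it is settled: it can never switch between fulfilled and rejected afterwards.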
2. **Callbacks**:
- Promises use two callback functions to handle their states:
- **resolve()**: A function called when the promise is successfully fulfilled. It passes a result value to any attached `.then()` handlers.
- **reject()**: A function called when the promise is rejected due to an error. It passes an error reason to any attached `.catch()` handlers.
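The two callbacks above can be illustrated with a short, self-contained sketch; the `divide` function here is a made-up example, not part of any library:

```javascript
// Hypothetical example: the executor receives resolve and reject
// and calls exactly one of them.
function divide(a, b) {
  return new Promise((resolve, reject) => {
    if (b === 0) {
      reject(new Error('Division by zero')); // settles the promise as rejected
    } else {
      resolve(a / b);                        // settles the promise as fulfilled
    }
  });
}

divide(10, 2)
  .then(result => console.log(result))       // 5
  .catch(error => console.error(error.message));
```

Calling `resolve()` hands the result value to the attached `.then()` handler, while calling `reject()` hands the error reason to the attached `.catch()` handler.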
3. **Chaining**:
- Promises allow you to chain multiple asynchronous operations together using `.then()`. This enables you to specify a sequence of tasks to execute when the promise is fulfilled.
```javascript
someAsyncFunction()
.then(result => {
// Handle the result
return anotherAsyncFunction(result);
})
.then(finalResult => {
// Handle the final result
})
.catch(error => {
// Handle errors from any of the steps
});
```
4. **Error Handling**:
- Promises provide a centralized way to handle errors using `.catch()`. Errors thrown in any `.then()` block will propagate down to the nearest `.catch()` handler.
```javascript
someAsyncFunction()
.then(result => {
// Handle the result
throw new Error('An error occurred');
})
.catch(error => {
// Handle the error
});
```
5. **Promise.all()**:
- You can use `Promise.all()` to wait for multiple promises to fulfill in parallel. It resolves when all the provided promises have resolved, providing an array of their results.
```javascript
const promises = [promise1, promise2, promise3];
Promise.all(promises)
.then(results => {
// Handle the results
})
.catch(error => {
// Handle errors from any of the promises
});
```
6. **Async/Await** (ES6 and later):
- ES6 introduced the `async/await` syntax, which simplifies working with promises, making asynchronous code look more like synchronous code.
```javascript
async function fetchData() {
try {
const result = await someAsyncFunction();
// Handle the result
} catch (error) {
// Handle errors
}
}
```
Promises are a crucial part of modern JavaScript and are widely used in web development for handling asynchronous operations such as AJAX requests, file I/O, and more. They promote cleaner, more maintainable code by providing a structured way to deal with asynchronous behavior and errors.
| lakharashubham007 |
1,639,361 | How to Get Your First Data Engineer Job? | Are you curious about data, numbers, and technology? Do you dream of working as a data engineer but... | 0 | 2023-10-19T05:05:05 | https://dev.to/aqsa81/how-to-get-your-first-data-engineer-job-4ifn | dataengineering, datascience, bigdata, sql | Are you curious about data, numbers, and technology? Do you dream of working as a data engineer but don't know where to start? You've come to the right place! In this guide, I will show you how to land an entry-level data engineer job step by step, using straightforward language and actionable tips.
## **Understanding the Role of a Data Engineer**
To begin, let's get a clear picture of what a data engineer does. Data engineers are the people responsible for making sure that data is organized and ready for data scientists and analysts to use. Here are their main tasks:
- **Collecting Data:** They gather and store data from different places.
- **Changing Data:** Data engineers clean up and organize data.
- **Building Data Systems:** They create systems to move data around efficiently.
- **Managing Databases:** They take care of databases and make sure data is in good shape.
- **Data Warehouses:** Data engineers create and look after places where data is stored.
- **Data Rules:** They make sure data is safe and follows the rules.
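As a tiny sketch of the "Collecting Data" and "Changing Data" tasks above (the field names and records are invented for illustration), here is what a minimal cleanup step might look like in Python using only the standard library:

```python
import csv
import io

# Hypothetical raw data a pipeline might collect (fields are invented)
raw = """name,age,city
Alice, 30 ,New York
Bob,,Boston
Carol,25,  chicago
"""

def transform(rows):
    """Clean each record: strip whitespace, normalize the city, drop rows missing age."""
    cleaned = []
    for row in rows:
        age = row["age"].strip()
        if not age:  # drop incomplete records
            continue
        cleaned.append({
            "name": row["name"].strip(),
            "age": int(age),
            "city": row["city"].strip().title(),
        })
    return cleaned

records = transform(csv.DictReader(io.StringIO(raw)))
print(records)
# [{'name': 'Alice', 'age': 30, 'city': 'New York'}, {'name': 'Carol', 'age': 25, 'city': 'Chicago'}]
```

Real pipelines do the same thing at much larger scale with tools like Spark, but the idea is identical: take messy input, enforce types and formats, and keep only valid rows.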
Now that you know what a data engineer does, let's move on to how to become one.
_**Check-> [12 Best+FREE Data Engineering Courses Online & Certifications](https://www.mltut.com/best-data-engineering-courses-online/)**_
## **Acquiring the Necessary Skills**
To become a data engineer, you need to learn specific skills. Here are the skills you need:
### **1. Understand the Basics**
Before you get into the technical stuff, it's crucial to be good at:
- **Math:** You should be comfortable with math, like algebra and statistics.
- **Computer Basics:** Know the basic ideas of how computers work, like how they sort information.
Having a strong foundation in these will help you understand data engineering better.
### **2. Be Good at Programming**
Programming is super important for data engineering. Focus on:
- **Python:** It's a widely used language in data engineering.
- **Java:** This one's important if you're dealing with really big data.
- **SQL:** You'll need this to work with databases.
### **3. Know About Databases**
You must learn a lot about databases. Focus on:
- **Regular Databases:** Learn how to use SQL databases like MySQL or PostgreSQL.
- **NoSQL Databases:** Explore databases like MongoDB, Cassandra, or Redis.
- **Data Design:** Know how to make databases work well.
### **4. Data Tools**
To work with data, you need to learn about these tools:
- **Apache Spark:** It's a great tool for working with a lot of data.
- **Apache Kafka:** This is for dealing with data that comes in really quickly.
- **Hadoop:** It's important for handling big data.
### **5. Learn Cloud Services**
Cloud services help you work with data on the internet. You can start with:
- **AWS (Amazon Web Services):** It's one of the biggest cloud services.
- **Azure:** Microsoft's cloud service, also very popular.
- **Google Cloud:** Google's cloud service, known for data tools.
Mastering these skills will make you a strong data engineer candidate.
## **Building a Strong Educational Foundation**
A good education can be your springboard to becoming a data engineer. Here's how to get started:
- **High School:** Start by taking math and computer science classes.
- **College Degree:** Pursue a degree in computer science, information technology, or a related field.
- **Online Courses:** Many websites offer online courses, such as Coursera, edX, and Udacity. These are great for learning data engineering skills.
- **Certifications:** Consider getting certifications in data engineering or related areas.
A strong education lays the groundwork for your data engineering journey.
_**Check-> [12 Best+FREE Data Engineering Courses Online & Certifications](https://www.mltut.com/best-data-engineering-courses-online/)**_
## **Gaining Practical Experience**
It's not just about what you know; it's also about what you can do. Gaining hands-on experience is crucial for landing your first data engineering job.
### **1. Internships**
Internships are an excellent way to get practical experience while you're still in school or right after graduating. Look for data engineering internships in your area or consider relocating for a great opportunity.
### **2. Personal Projects**
Create your own data projects. You can start by collecting and analyzing data from something that interests you. Document your process and showcase your projects in a portfolio. This will impress potential employers.
### **3. Open Source Contributions**
Contribute to open source data projects. Many organizations are looking for help from the community to improve their data tools. By contributing, you gain valuable experience and make connections in the field.
### **4. Online Courses**
Online courses often come with practical exercises. Completing these projects demonstrates your skills and commitment to potential employers.
## **Creating a Standout Resume**
Your resume is your first impression on potential employers. Make it shine by:
- **Highlighting Skills:** Emphasize your skills, especially those relevant to data engineering.
- **Detailing Projects:** Mention personal projects, open-source contributions, and any internships.
- **Including Certifications:** If you have relevant certifications, don't forget to mention them.
- **Tailoring for the Job:** Customize your resume for each job application to match the specific requirements.
## **Networking and Making Connections**
Networking can open doors to job opportunities. Here's how to get started:
- **LinkedIn:** Create a professional LinkedIn profile. Connect with people in the data engineering field, including recruiters, professors, and professionals.
- **Conferences and Meetups:** Attend industry events, conferences, and local meetups to meet people and learn from experts.
- **Online Communities:** Participate in online data engineering forums and communities. Engaging in discussions can help you make connections.
## **Preparing for Interviews**
Getting invited to an interview is a big step. Here's how to prepare for the two main types of interviews you may encounter:
### **1. Behavioral Interviews**
These interviews focus on your past experiences and how you handle situations. Be ready to discuss:
- **Teamwork:** Talk about projects where you worked well with others.
- **Problem Solving:** Describe challenges you've faced and how you overcame them.
- **Adaptability:** Explain how you've learned from your experiences.
### **2. Technical Interviews**
In these interviews, you'll be tested on your technical skills. Be prepared for questions about:
- **Algorithms:** Expect algorithm problems that test your problem-solving abilities.
- **Coding:** Be ready to write code on a whiteboard or using an online platform.
- **Data Knowledge:** You may be asked about data structures and databases.
## **Applying for Entry-Level Data Engineer Jobs**
When applying for jobs, remember to:
- **Search Widely:** Look for data engineering positions on job websites, company career pages, and LinkedIn.
- **Tailor Your Applications:** Customize your cover letter and resume for each application.
- **Follow Instructions:** Pay close attention to the job posting's instructions for applying.
- **Apply Early:** Apply as soon as you find a job opening to increase your chances.
## **Nailing the Interview**
During the interview, remember to:
- **Research the Company:** Understand the company's mission, culture, and products.
- **Prepare Questions:** Have thoughtful questions ready to ask the interviewer.
- **Be Confident:** Confidence and a positive attitude can make a great impression.
- **Follow Up:** Send a thank-you email after the interview to express your continued interest.
## **The Final Steps**
Once you land your first data engineer job, you're not done. Keep learning and growing:
- **On-the-Job Learning:** Learn from your colleagues and keep improving your skills.
- **Certifications:** Consider pursuing advanced certifications.
- **Mentorship:** Find a mentor who can guide your career.
_**Check-> [Data Engineering Career Path: Step by Step Complete Roadmap](https://www.mltut.com/data-engineering-career-path/)**_
## **Conclusion**
Landing your first entry-level data engineer job is an exciting journey. It takes education, practical experience, and networking. Stay committed, keep learning, and you'll be well on your way to achieving your dream of working as a data engineer. Good luck on your path to success! | aqsa81 |
1,639,530 | Why We Call Saving the Magical Feature | Recently, the entire team at Welltested AI engaged in a discussion to understand what according to us... | 0 | 2023-10-19T08:42:23 | https://dev.to/welltestedai/why-we-call-saving-the-magical-feature-183o | welltested, testing, flutter, ai | Recently, the entire team at Welltested AI engaged in a discussion to understand what according to us is the most notable feature of Welltested at the moment. And, everyone agreed upon `welltested save unit` which we also call the magic command :)
While we love it internally, a quick look at our database analytics showed that most of our users aren't using it yet! This led to the question: are we talking about it enough? haha.
So, we decided to write this blog to share with our users what is so special about Welltested's **save** feature and how to use it effectively. Let's dive into it.
### Setup Your Project
For starters, download the [flutter twitter clone](https://github.com/TheAlphamerc/flutter_twitter_clone) project that we will be using. You can also use your project or download any other open-source project of your choice to follow along. The steps that we will follow will mostly remain the same.
Next, open the project in the IDE. After opening the project, we will need to head to [Welltested](https://pub.dev/packages/welltested) on pub.dev. Then, follow the instructions available on the page to set up Welltested. Since the setup process is fairly straightforward and well-documented, you will not face any issues following along. The general steps include:
1. Getting the API key and adding it to the project
2. Adding the Welltested packages
3. Activating Welltested CLI in your terminal
Once done we are ready to generate and save the tests :)
### Generate Unit Tests
To generate the tests, we'll need to add the `@Welltested` annotation to the methods or classes we want tests for. This helps Welltested identify the methods for which test generation is requested.
Open **lib/helper/utility.dart** file in the project. We will find `Utility` class with a bunch of static methods declared inside it:

For this blog, we will generate tests for the methods present in the Utility class only. Add the Welltested annotation at the top of the `Utility` class:
```dart
@Welltested()
```
This annotation above the `Utility` class enables Welltested to know that we're interested in generating tests for the methods defined in this class.
Next, we will generate the tests by running the following chain of commands in the terminal:
```bash
welltested generate unit -m getPostTime2 &&
welltested generate unit -m getDob &&
welltested generate unit -m getJoiningDate &&
welltested generate unit -m getUserName
```
Upon execution of the above commands, Welltested will start generating tests for the methods named after the `-m` parameter, provided they carry the `@Welltested` annotation (either directly or on the class containing them).
Open the generated tests located in the **tests/helper/utility** directory after the execution of the generate commands is finished. Take a look at the generated files. Congratulations! You have successfully generated tests for your code in less than 5 minutes. Isn't it great :)
Looking at the generated tests. You will see that some of the tests have errors:

But don't worry, Welltested has got you covered here. And this is where the Welltested's **save** command shines 😎
### Welltested's Save Command
Welltested AI has the ability to constantly learn and improve as you interact with it. Since this is our first interaction with the AI, it may make mistakes in the initial tests as it hasn't yet learned about your project and your coding style. However, fixing this is simple.
You can simply fix the broken generated tests and save them using Welltested's `save` command. Welltested will automatically learn from them and provide much smarter, error-free test cases in the future.
To understand this in more detail. Let's look at an example by updating and saving one of the earlier generated unit tests. Afterwards, we will use Welltested's `generate` command to generate the remaining tests again.
Open **test/helper/utility/getPostTime2.welltested_test.dart** file and update it with the following:
```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:flutter_twitter_clone/helper/utility.dart';
void main() {
group('Utility getPostTime2', () {
// 1.
// Older version
// test('When date is null, should return empty string', () {
// final mockUtility = MockUtility();
// when(mockUtility.getPostTime2(null)).thenReturn('');
// expect(mockUtility.getPostTime2(null), '');
// });
test('When date is null, should return empty string', () {
expect(Utility.getPostTime2(null), equals(''));
});
test('When date is empty, should return empty string', () {
expect(Utility.getPostTime2(''), equals(''));
});
test('When date is not in correct format, should throw FormatException',
() {
expect(() => Utility.getPostTime2('invalid date'), throwsFormatException);
});
test('When date is in correct format, should return formatted date', () {
String date = '2022-12-01T20:18:04';
String expectedDate = '08:18 PM - 01 Dec 22';
expect(Utility.getPostTime2(date), equals(expectedDate));
});
// 2.
// Older version
// test('When date is in correct format but in different timezone, should return formatted date in local timezone', () {
// final mockUtility = MockUtility();
// String date = '2022-12-01T13:18:04Z'; // This is 20:18 in GMT+7
// String expectedDate = '08:18 PM - 01 Dec 22';
// when(mockUtility.getPostTime2(date)).thenReturn(expectedDate);
// expect(mockUtility.getPostTime2(date), equals(expectedDate));
// });
test(
'When date is in correct format but in different timezone, should return formatted date in local timezone',
() {
String date = '2022-12-01T14:48:04Z'; // This is 20:18 in IST
String expectedDate = '08:18 PM - 01 Dec 22';
expect(Utility.getPostTime2(date), equals(expectedDate));
});
});
}
```
On comparing the updated code with the previously generated test code. You will find two major differences here:
1. The generated code utilizes `MockUtility`, which is a mock version of the `Utility` class. However, the updated code directly employs the `Utility` class. This is because our objective is to test the methods that are encapsulated within this class.
2. The final test case aims to verify that the `getPostTime2` function accurately converts a date from a different time zone into the local time zone. Initially, the AI assumed GMT+7 was the local time zone, but this led to a failure because my actual local time zone is IST. Consequently, the code was adjusted to employ a date that, when processed by `getPostTime2`, would align with the date stored in the expectedDate variable for Indian Standard Time (IST).
Now, since we have only updated one test, we will delete all other generated tests. This is because when we execute the `save` test command, Welltested saves all the tests it has generated and that are present in the test directory. We want to ensure that our AI doesn't learn from code with errors. Don't you agree?
After deleting all the test files except **/getPostTime2.welltested_test.dart**, run the following command in the terminal:
```bash
welltested save unit
```
After a successful save, you'll receive the following message in the terminal:
```bash
✅ Succeeded. 1 were saved.
```
Now, add the following command on the terminal to generate tests again:
```bash
welltested generate unit -m getDob &&
welltested generate unit -m getJoiningDate &&
welltested generate unit -m getUserName
```
After the test generation is completed. Revisit the generated tests:

This time, the tests adhere to our coding style. Also, the generated code correctly uses the local time zone for my case based on the saved code which is fantastic. Don't you agree? I certainly do.
### Conclusion
By now, I hope you can appreciate the power of saving tests in Welltested and how it can assist you in crafting more efficient tests tailored to your specific needs and coding style.
I encourage you to generate tests using Welltested and to save all of them. This will not only speed up the test generation process but also enhance the quality of your tests over time. Happy Testing :) | yogesh009 |
1,639,548 | Industrial Ethylene Oxide Sterilizer-Lodha International | The Industrial Ethylene Oxide (EO) Sterilizer offered by Lodha International is a state-of-the-art... | 0 | 2023-10-19T09:18:27 | https://dev.to/lodhapharma/industrial-ethylene-oxide-sterilizer-lodha-international-119n | The Industrial Ethylene Oxide (EO) Sterilizer offered by Lodha International is a state-of-the-art sterilization system capable of effectively sterilizing various types of medical devices and equipment. which has excellent penetrating properties, to ensure thorough and efficient sterilization of even the most intricate and sensitive instruments.for more information visit our website:-https://www.lodhapharma.com/industrial-ethylene-oxide-sterilizer.php | lodhapharma | |
1,639,781 | Deleting a Column in SQL: Everything You Need to Know | Let’s learn what happens when you delete a column in SQL and how to do it in the most popular DBMS... | 21,681 | 2023-10-19T13:10:22 | https://www.dbvis.com/thetable/deleting-a-column-in-sql/ | delete, sql | **Let’s learn what happens when you delete a column in SQL and how to do it in the most popular DBMS technologies.**
---
Tools used in this tutorial
[DbVisualizer](https://www.dbvis.com/), top rated database management tool and SQL client.
---
Deleting a column in SQL is one of the most common operations when dealing with a database. For this reason, knowing how to properly delete a column in SQL is critical. Without the right procedures and precautions, you could run into data integrity and data loss issues.
In this guide, you will dig into the process of deleting columns in SQL databases, seeing the essential concepts and syntax. We will provide you with all the information you need to perform column deletion operations smoothly and effectively.
Let’s dive in!
## What Does it Mean to Delete a Column in SQL?
In SQL, deleting a column refers to the process of permanently removing a specific column from a table. All data associated with the column will be removed from the disk. Similarly, the column’s metadata will be removed from the table's schema. When you delete a column, you essentially eliminate its existence within the table structure. The column will be no longer available for queries and its data will be lost.
The SQL command to delete an existing column from a table is `DROP COLUMN`. This is part of the SQL DDL ([Data Definition Language](https://en.wikipedia.org/wiki/Data_definition_language)), which contains statements to define, modify, or delete the structure of database objects like tables, indexes, and constraints. Since `DROP COLUMN` involves modifying the structure of an existing table, it must be used in an `ALTER TABLE` query.
Here is the generic SQL syntax to delete a column:
```
ALTER TABLE <table_name>
DROP COLUMN <column_name>;
```
Replace `table_name` with the name of the table you want to remove the `column_name` column from. Later in this article, you will see how to use this query in the most popular relational databases, such as MySQL, PostgreSQL, Microsoft SQL Server, and Oracle Database.
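If you want to experiment with this syntax without setting up a full database server, the flow can be sketched in Python with the built-in `sqlite3` module (SQLite supports `DROP COLUMN` as of version 3.35.0; the `users` table and its columns here are purely illustrative):

```python
import sqlite3

# In-memory database, so nothing touches the disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO users (name, email) VALUES ('Ada', 'ada@example.com')")

# Drop the column: its data and metadata are removed permanently.
conn.execute("ALTER TABLE users DROP COLUMN email")

# The table's schema no longer lists the dropped column.
columns = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
print(columns)  # ['id', 'name']
```

Inspecting the schema afterward (here via `PRAGMA table_info`) is the quickest way to confirm the column is really gone.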
## Consequences of Deleting a Column in SQL
Here are the key aspects to consider when removing a column from a table in SQL:
- **Data removal:** All data currently stored in the column to be deleted, in each row of the table, will be permanently deleted. This includes all values, records, or information previously stored in that column.
- **Metadata removal:** The column's metadata, such as its name, data type, default values, constraints, and indexes will be removed from the table's schema definition. This modification impacts the structure of the table, changing the number of its columns.
- **Query impact:** Any queries, statements, or [SQL procedures](https://www.dbvis.com/thetable/stored-procedures-in-sql-a-complete-tutorial/) that reference the deleted column must be updated accordingly. In detail, you need to exclude the column from the query to avoid errors or unexpected results.
- **Data integrity:** If the column is part of foreign keys or other constraints, deleting it involves data integrity considerations. In some DBMSs, you may need to remove these constraints or manage them accordingly before you can delete the column.
- **Performance improvements:** Deleting a column in SQL can lead to improved query performance, especially if the column is not frequently used or contains redundant information. This also results in reduced storage.
Keep in mind that deleting a column is an operation that cannot be undone. This means that you must be careful with it, especially in production environments. Before proceeding with the irreversible operation, it is critical to plan properly, back up the database, and consider the potential impact on existing applications and queries.
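The two consequences that matter most in practice, irreversibility and broken queries, can be demonstrated with a short sketch, again using Python's `sqlite3` as a stand-in database. The `orders` table and the `orders_backup` copy are illustrative; `CREATE TABLE ... AS SELECT` is one simple way to keep a recoverable copy before the irreversible drop:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, price REAL)")
conn.execute("INSERT INTO orders (price) VALUES (9.99)")

# Simple safety net: copy the table (data included) before the irreversible drop.
conn.execute("CREATE TABLE orders_backup AS SELECT * FROM orders")

conn.execute("ALTER TABLE orders DROP COLUMN price")

# Any query that still references the dropped column now fails.
try:
    conn.execute("SELECT price FROM orders")
    query_failed = False
except sqlite3.OperationalError:
    query_failed = True

# The backup still holds the old data.
backup_price = conn.execute("SELECT price FROM orders_backup").fetchone()[0]
print(query_failed, backup_price)  # True 9.99
```

A table copy is no substitute for a real database backup, but it illustrates why planning recovery paths before running `DROP COLUMN` is essential.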
## How to Delete a Column in SQL
In this section, you will see how to delete one or more columns in the most popular relational databases.
### Deleting a column in MySQL
The syntax to [delete a single column in MySQL](https://dev.mysql.com/doc/refman/8.0/en/alter-table.html#alter-table-add-drop-column) is:
```
ALTER TABLE <table_name>
DROP [COLUMN] <column_name>;
```
Note that the `COLUMN` keyword is optional.
For example, you can remove the `email` column from the `users` table with:
```
ALTER TABLE users
DROP COLUMN email;
```
The syntax to delete multiple columns with a single query in MySQL is:
```
ALTER TABLE <table_name>
DROP [COLUMN] <column1>,
DROP [COLUMN] <column2>,
...,
DROP [COLUMN] <columnN>;
```
Again, `COLUMN` is optional.
So, the query below removes the `phone` and `address` columns from the `contacts` table:
```
ALTER TABLE contacts
DROP phone,
DROP address;
```
### Deleting a column in PostgreSQL
The syntax to [delete a single column in PostgreSQL](https://www.postgresql.org/docs/current/sql-altertable.html) is:
```
ALTER TABLE <table_name>
DROP [COLUMN] <column_name>;
```
The `COLUMN` keyword is optional.
For example, you can remove the `address` column from the `users` table with:
```
ALTER TABLE users
DROP COLUMN address;
```
The syntax to delete more than one column with a single PostgreSQL query is:
```
ALTER TABLE <table_name>
DROP [COLUMN] <column1>,
DROP [COLUMN] <column2>,
...,
DROP [COLUMN] <columnN>;
```
Again, `COLUMN` is optional.
So, the following query deletes the `price` and `quantity` columns from the `orders` table:
```
ALTER TABLE orders
DROP COLUMN price,
DROP COLUMN quantity;
```
### Deleting a column in Microsoft SQL Server
The syntax to [delete a single column in Microsoft SQL Server](https://learn.microsoft.com/en-us/sql/relational-databases/tables/delete-columns-from-a-table?view=sql-server-ver16) is:
```
ALTER TABLE <table_name>
DROP COLUMN <column_name>;
```
For example, you can drop the `release_date` column from the `games` table with:
```
ALTER TABLE games
DROP COLUMN release_date;
```
The syntax to delete multiple columns with a single T-SQL query is:
```
ALTER TABLE <table_name>
DROP COLUMN <column1>, <column2>, ..., <columnN>;
```
Thus, this query removes the `time` and `effort` columns from the `projects` table:
```
ALTER TABLE projects
DROP COLUMN time, effort;
```
### Deleting a column in Oracle Database
The syntax to [delete a single column in Oracle Database](https://oracle-base.com/articles/8i/dropping-columns) is:
```
ALTER TABLE <table_name>
DROP COLUMN <column_name>;
```
For example, you can remove the `price` column from the `products` table with:
```
ALTER TABLE products
DROP COLUMN price;
```
The syntax to delete multiple columns with a single Oracle query is:
```
ALTER TABLE <table_name>
DROP (<column1>, <column2>, ..., <columnN>);
```
Then, the query below deletes the `address` and `age` columns from the `users` table:
```
ALTER TABLE users
DROP (address, age);
```
Note that Oracle also allows you to logically delete a column with the [SET UNUSED](https://oracle-base.com/articles/8i/dropping-columns#LogicalDelete) command.
## Column Removal in SQL: Complete Example
When deleting a column in SQL, it is critical to use the right tool. You should not simply fire a `DROP COLUMN` statement from the command line and hope for the best. You need to verify what actually happened as a result of the query, and the best way to do that is with a database client. This tool allows you to visually explore tables and their data, helping you monitor your databases and fix issues as they arise. That is exactly what a top-notch database client like [DbVisualizer](https://www.dbvis.com/) offers!
[Download DbVisualizer for free](https://www.dbvis.com/download/) and follow the wizard to set up a database connection. Suppose you have an `account` MySQL database. Navigate to the `users` table and open the “Columns” dropdown.
Right-click on “Columns” and select the “Open in New Tab” option:
<br />

<figure><figcaption>The "Columns" tab in DbVisualizer.</figcaption></figure>
<br />
Here, you can visually explore all columns a table consists of.
Now, open the SQL commander and launch the query below to remove the `is_active` column:
```
ALTER TABLE users
DROP COLUMN is_active;
```
<br />

<figure><figcaption>Launching the DROP COLUMN operation in DbVisualizer.</figcaption></figure>
<br />
Right-click on “Columns” and select “Refresh Objects Tree:”
<br />

<figure><figcaption>Refreshing the database objects tree.</figcaption></figure>
<br />
This will refresh the column list, and you should notice that `is_active` is now gone:
<br />

<figure><figcaption>The "is_active" column is no longer in the list.</figcaption></figure>
<br />
Et voilà! You just deleted a column and saw the results of the operation in a few simple steps!
## Conclusion
In this article, you saw what it means to delete a column in SQL, what happens when you do it, and how to do it. Thanks to what you learned here, you now know how to handle single and multiple column deletion in MySQL, PostgreSQL, Microsoft SQL Server, and Oracle Database.
Since deleting columns in SQL alters the table schema, monitoring the consequences in a database client makes everything easier. This is where DbVisualizer comes in! In addition to supporting visual querying and data exploration in dozens of databases, this tool also offers advanced query optimization capabilities, and ER schema exploration. [Download DbVisualizer for free now!](https://www.dbvis.com/download/)
## FAQ
### What is the difference between dropping a column and deleting a column in SQL?
In SQL, dropping a column and deleting a column mean the same thing. Both expressions refer to the process of permanently removing a column from a table, including its data and metadata.
### What is the difference between deleting a column logically and deleting a column physically?
The distinction between logical and physical deletion in a relational database lies in how data removal is handled. Logical deletion involves marking the data as inactive without physically removing the column. On the other hand, physical deletion involves permanently removing the data from the database, making it unrecoverable.
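The distinction can be made concrete with a short sketch, using Python's `sqlite3` as the database. The `users` table and the choice of `UPDATE ... SET ... NULL` as a logical-deletion strategy are illustrative assumptions; Oracle's `SET UNUSED`, mentioned earlier, is a built-in alternative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, age INTEGER, address TEXT)")
conn.execute("INSERT INTO users VALUES (1, 30, 'Main St')")

# Logical deletion: the data is blanked out, but the column stays in the schema.
conn.execute("UPDATE users SET age = NULL")
cols_after_logical = [r[1] for r in conn.execute("PRAGMA table_info(users)")]

# Physical deletion: the column disappears from the schema entirely.
conn.execute("ALTER TABLE users DROP COLUMN address")
cols_after_physical = [r[1] for r in conn.execute("PRAGMA table_info(users)")]

print(cols_after_logical)   # ['id', 'age', 'address']
print(cols_after_physical)  # ['id', 'age']
```

Note how the logically deleted column still appears in the schema, while the physically deleted one is gone for good.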
### Is it possible to delete multiple columns in a single SQL statement?
In most SQL dialects, you can remove more than one column with a single `ALTER TABLE` query by using a special syntax. In such a scenario, you have to specify all the columns you want to delete in a list or use the `DROP COLUMN` command several times.
### What happens when you try to delete a column that does not exist?
When attempting to delete a column that does not exist in the table, the database will raise an error. For example, MySQL returns the following error message:
```
Can't DROP ''; check that column/key exists
```
Thus, ensure that the columns you want to delete exist in the table before launching the command.
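One defensive pattern is to check the table's schema before issuing the command. A minimal sketch using SQLite via Python (the helper name `drop_column_if_exists` is hypothetical, and interpolating identifiers into SQL this way is only safe with trusted input):

```python
import sqlite3

def drop_column_if_exists(conn, table, column):
    """Drop `column` from `table` only if it exists; return True if dropped."""
    # Identifiers cannot be bound as query parameters, so only pass trusted names.
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    if column not in cols:
        return False
    conn.execute(f"ALTER TABLE {table} DROP COLUMN {column}")
    return True

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
dropped = drop_column_if_exists(conn, "users", "email")    # True
missing = drop_column_if_exists(conn, "users", "missing")  # False
print(dropped, missing)
```

Each DBMS exposes its own catalog views (e.g. `information_schema.columns`) that can serve the same purpose as the `PRAGMA` used here.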
### What are the best practices to consider when deleting a column from a production database?
Best practices help minimize data loss and disruption when removing a column from a production database. These include:
- Generating database backups.
- Evaluating relationships between tables, such as foreign keys or constraints.
- Notifying stakeholders and affected teams before proceeding.
- Performing comprehensive testing in a non-production environment before applying changes to the live database.
- Devising a rollback plan in case of unexpected issues.
## About the author
Antonello Zanini is a software engineer, and often refers to himself as a technology bishop. His mission is to spread knowledge through writing.
| dbvismarketing |
1,639,836 | An Easier Way: Chat to Deploy Llama2 with Walrus | In the previous blog, we explored how to deploy Llama2 on AWS with Walrus. In this blog, we will... | 25,061 | 2023-10-19T14:14:05 | https://www.seal.io/resource/blog/easier-way-to-deploy | devops, tutorial, llm, aws | In [the previous blog](https://dev.to/seal-io/how-to-deploy-llama2-on-aws-with-walrus-in-minutes-1n13), we explored how to deploy Llama2 on AWS with Walrus. In this blog, we will introduce an AI tool, [Appilot](https://github.com/seal-io/appilot), to simplify the deployment.
Appilot ['æpaɪlət] stands for application-pilot. It is an experimental project that helps you operate applications using GPT-like LLMs. Appilot empowers users to execute tasks such as application management, environment management, diagnose, and hybrid infrastructure orchestration seamlessly through natural language commands.
## Prerequisites
- Get an OpenAI API key with access to the `gpt-4` model.
- Install `python3` and `make`.
- Install [kubectl](https://kubernetes.io/docs/tasks/tools/) and [helm](https://helm.sh/docs/intro/install/).
- Have a running Kubernetes cluster.
## Install Appilot
- Clone the repository.
```
git clone https://github.com/seal-io/appilot && cd appilot
```
- Run the following command to get the envfile.
```
cp .env.example .env
```
- Edit the `.env` file and fill in `OPENAI_API_KEY`.
- Run the following command to install. It will create a venv and install required dependencies.
```
make install
```
## Using Walrus Backend
Walrus serves as the application management engine. It provides features like hybrid infrastructure support, environment management, etc. To enable the Walrus backend, you first need to [install Walrus](https://seal-io.github.io/docs/quickstart) and then edit the envfile:
1. Set `TOOLKITS=walrus`
2. Fill in `OPENAI_API_KEY`, `WALRUS_URL` and `WALRUS_API_KEY`
Appilot is configurable via environment variables or the envfile. The table below provides more details about the configuration:

Then you can run Appilot to get started:
```
make run
```
## Chat to Deploy Llama2 on AWS
{% cta https://github.com/seal-io/appilot#demo %}Click to watch Demo(1min) {% endcta %}
## Conclusion
In this blog, we've embarked on a journey to explore the deployment of Llama2, using Appilot and Walrus. We've witnessed how these powerful tools can simplify the complex process of deployment.
In conclusion, as demonstrated in this blog, both Appilot and Walrus stand as indispensable allies in the face of the intricate challenges of application deployment. What's even more enticing is that both these tools are open-source, inviting you to download and give them a try!
- Walrus: https://github.com/seal-io/walrus
- Appilot: https://github.com/seal-io/appilot
| seal-io |
1,640,027 | Install Azure CLI using Powershell on Windows | If you want to interact with Microsoft Azure resources from your machine, you need an Azure CLI... | 26,166 | 2023-10-19T17:11:51 | https://dev.to/jasper475/install-azure-cli-using-powershell-on-windows-c | If you want to interact with Microsoft Azure resources from your machine, you need an Azure CLI installed.
**Pre-Requisites:**
1. Host Laptop (Windows, Mac, Linux)
2. [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) Installation documentation.
In this tutorial, I will share specific steps for installing Azure CLI using PowerShell.
## Step 1: Installation using Powershell
Open PowerShell and copy and paste the following commands.
- **For Windows 32-bit:**
```
$ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest -Uri https://aka.ms/installazurecliwindows -OutFile .\AzureCLI.msi; Start-Process msiexec.exe -Wait -ArgumentList '/I AzureCLI.msi /quiet'; Remove-Item .\AzureCLI.msi
```
- **For Windows 64-bit:**
```
$ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest -Uri https://aka.ms/installazurecliwindowsx64 -OutFile .\AzureCLI.msi; Start-Process msiexec.exe -Wait -ArgumentList '/I AzureCLI.msi /quiet'; Remove-Item .\AzureCLI.msi
```
## Step 2: Verify Installation
You can verify the installation by typing the following command in PowerShell:
- `az --version`
```
PS C:\WINDOWS\system32> az --version
azure-cli 2.53.0
core 2.53.0
telemetry 1.1.0
Dependencies:
msal 1.24.0b2
azure-mgmt-resource 23.1.0b2
Python location 'C:\Program Files\Microsoft SDKs\Azure\CLI2\python.exe'
```
Once you see this output, that's it: you have installed the Azure CLI successfully.
## Step 3: Update the Azure CLI
- Type `az upgrade`:
```
PS C:\WINDOWS\system32> az upgrade
This command is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
You already have the latest azure-cli version: 2.53.0
Upgrade finished.You can enable auto-upgrade with 'az config set auto-upgrade.enable=yes'. More details in https://docs.microsoft.com/cli/azure/update-azure-cli#automatic-update
```
## Step 4: Uninstall Azure CLI
| Platform | Instructions |
|---|---|
| Windows 11 | Start > Settings > Apps > Installed apps |
| Windows 10 | Start > Settings > System > Apps & Features |
| Windows 8 and Windows 7 | Start > Control Panel > Programs > Uninstall a program |
| jasper475 | |
1,640,257 | Day 836 : Worst Comes To Worst | liner notes: Professional : Got to demo an application I want to use in upcoming talks and... | 0 | 2023-10-19T21:55:14 | https://dev.to/dwane/day-836-worst-comes-to-worst-594h | hiphop, code, coding, lifelongdev | _liner notes_:
- Professional : Got to demo an application I want to use in upcoming talks and workshops. Looked like it worked for everyone except with a person on a iPhone using Chrome. I'll have to investigate more. I responded to some community questions. I even went through some tutorials and updated the code snippets and wording. Not a bad day.
- Personal : Last night, I went through some tracks for the radio show. I built the test application I got to demo today. That's all I think I did. I really need to start using https://whatyoudo.in again so I can look back and remember.

Going to pick up some albums on Bandcamp and put together the social media posts. I started pulling the clothes I want to take on my trip and separating them by what will go in the backpack and what will go in the suitcase. Oh, and the suitcase's tracking said it's supposed to come Saturday instead of today. For like 2 days, there were no updates to the website. I just hope it gets here in time. Worst comes to worst, I have a back up plan. Aside from the suitcase, I think all the stuff I ordered should be coming in today. I'm going to try and start ironing clothes. Going to eat dinner and get started.
Have a great night!
peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com
{% youtube sevZEOUXpw4 %} | dwane |
1,640,269 | Medical Grade Protective Mask Filter Material Market Size, Type, segmentation, growth and forecast 2023-2030 | Medical Grade Protective Mask Filter Material Market The Medical Grade Protective Mask Filter... | 0 | 2023-10-19T22:23:56 | https://dev.to/chiragreportprime1/medical-grade-protective-mask-filter-material-market-size-type-segmentation-growth-and-forecast-2023-2030-17gf | business, marketing, growth, marketresearch |

Medical Grade Protective Mask Filter Material Market
The Medical Grade Protective Mask Filter Material Market is expected to grow from USD 2.80 Billion in 2022 to USD 3.70 Billion by 2030, at a CAGR of 3.30% during the forecast period.
Get the Sample Report: https://www.reportprime.com/enquiry/sample-report/11368
Medical Grade Protective Mask Filter Material Market Size
Medical Grade Protective Mask Filter Material is a critical component of the personal protective equipment used to prevent the spread of infectious diseases. The market research report on this industry segment provides an overview of the market, including segmentation by type (PP Nonwoven, PTFE Nonwoven), application (Medical, Industrial, Individual), and region (North America, Asia Pacific, Middle East, Africa, Australia, and Europe). The report profiles key market players, including Toray, Fiberweb, Mogul, Monadnock Non-Woven, Kimberly-Clark, Freudenberg, Berry Global, Don & Low, PEGAS NONWOVENS, Irema, 3M, Uniquetex, Gulsan Group, Avgol, CHTC Jiahua Nonwoven, JOFO, TEDA Filter, Yanjiang Group, Zisun Technology, Ruiguang Group, Qingdao Yihe Nonwoven, Liyang New Material, Shanghai Kingfo Industrial, Xinlong Group. Additionally, the report covers the regulatory and legal factors specific to market conditions, which can impact the growth and expansion of the industry. The demand for medical grade protective mask filter materials is expected to continue to rise as the world navigates through the ongoing COVID-19 pandemic, making it a critical and rapidly evolving market with great potential for growth in the years to come.
Medical Grade Protective Mask Filter Material Market Key Player
Toray
Fiberweb
Mogul
Monadnock Non-Woven
Kimberly-Clark
Get an Exclusive Discount on this report: https://www.reportprime.com/enquiry/request-discount/11368
Medical Grade Protective Mask Filter Material Market Segment Analysis
The Medical Grade Protective Mask Filter Material market is expected to witness significant growth in the coming years. The target market includes healthcare professionals, patients with respiratory diseases, and individuals in high-risk areas such as airports, public transportation, and crowded spaces. The increasing prevalence of respiratory diseases such as COVID-19, influenza, tuberculosis, and other airborne illnesses is expected to drive the growth of the market.
The major factors driving revenue growth of the Medical Grade Protective Mask Filter Material market include the surge in demand for personal protective equipment due to the COVID-19 outbreak, the growing awareness about the importance of mask-wearing in preventing the spread of respiratory diseases, and the technological advancements in filter material manufacturing.
The latest trend followed by the Medical Grade Protective Mask Filter Material market is the development of reusable and washable mask filters, which provide high-level protection and are cost-effective. The major challenges faced by the market include the shortage of raw materials, supply chain disruptions due to the pandemic, and the price fluctuations of filter materials.
The global Medical Grade Protective Mask Filter Material market is projected to grow at a CAGR of 3.30% during the forecast period (2022-2030). The Asia Pacific region is expected to dominate the market due to the increasing demand for personal protective equipment in China, India, and Japan.
The report recommends market players to focus on expanding their product portfolio, building strong distribution networks, and investing in R&D for the development of innovative and cost-effective filter materials. Moreover, companies should also adopt sustainable and eco-friendly manufacturing techniques to address the environmental concerns associated with the disposal of mask filters.
In conclusion, the Medical Grade Protective Mask Filter Material market is poised for significant growth in the coming years, driven by the high demand for personal protective equipment and the growing awareness about the importance of mask-wearing in preventing the spread of respiratory diseases. However, the market also faces challenges such as supply chain disruptions and price fluctuations, which need to be addressed by adopting innovative and sustainable manufacturing techniques.
This report covers impact on COVID-19 and Russia-Ukraine wars in detail.
Purchase This Report: https://www.reportprime.com/checkout?id=11368&price=3590
Market Segmentation (by Application):
Medical
Industrial
Individual
Information is sourced from www.reportprime.com
| chiragreportprime1 |
1,640,318 | PCR Tubes And PCR Plates Market Size, Type, segmentation, growth and forecast 2023-2030 | PCR Tubes And PCR Plates Market The PCR Tubes And PCR Plates Market is expected to grow from USD... | 0 | 2023-10-19T23:32:57 | https://dev.to/chiragreportprime2/pcr-tubes-and-pcr-plates-market-size-type-segmentation-growth-and-forecast-2023-2030-9ah | marketing, business, growth, marketresearch |

PCR Tubes And PCR Plates Market
The PCR Tubes And PCR Plates Market is expected to grow from USD 1.70 Billion in 2022 to USD 2.20 Billion by 2030, at a CAGR of 3.10% during the forecast period.
Get the Sample Report: https://www.reportprime.com/enquiry/sample-report/11377
PCR Tubes And PCR Plates Market Size
PCR Tubes and PCR Plates are essential tools for the Polymerase Chain Reaction (PCR) process used in molecular biology. PCR Tubes are small plastic tubes designed to hold PCR samples while PCR Plates are flat plastic plates containing multiple wells for holding PCR samples. The global market research report on PCR Tubes and PCR Plates includes segmentation based on type (Non Skirted, Half Skirted, Up Skirted, Full Skirted, Others), application (Biomedicine, Genetic, Others), region (North America, Asia Pacific, Middle East, Africa, Australia, and Europe), and market players (Kisker Biotech, BIOplastics, J&K Scientific, Ratiolab, Capp, BrandTech, Ahn, Eppendorf, Corning, Globe Scientific, Deltalab, Biosigma). The key regulatory and legal factors specific to market conditions include strict guidelines for the manufacturing and handling of PCR tubes and plates, and quality control standards for PCR reagents and enzymes. The demand for PCR Tubes and PCR Plates is expected to increase with the growing demand for PCR-based genetic testing and research, especially in emerging economies.
PCR Tubes And PCR Plates Market Key Player
Kisker Biotech
BIOplastics
J&K Scientific
Ratiolab
Capp
Buy Now & Get an Exclusive Discount on this report: https://www.reportprime.com/enquiry/request-discount/11377
PCR Tubes And PCR Plates Market Segment Analysis
PCR Tubes and PCR Plates are essential laboratory consumables used for a variety of applications such as amplification of DNA and RNA in molecular biology research, diagnostic testing, and drug discovery. The global PCR Tubes and PCR Plates market is expected to witness significant growth in the coming years due to the increasing demand for these consumables in various applications.
The major factors driving the revenue growth of the PCR Tubes and PCR Plates market include the growing prevalence of infectious diseases, genetic disorders, and cancer, technological advancements in PCR technology, and increasing funding for research activities. Additionally, the rising adoption of PCR-based diagnostics and the growing demand for personalized medicine are also expected to boost market growth.
The latest trends in the PCR Tubes and PCR Plates market include an increasing focus on miniaturization, automation, and customization of PCR plates to enhance efficiency and reduce costs. The development of novel PCR technologies, such as real-time PCR and digital PCR, is also driving market growth. The rising demand for multi-colored and high-throughput PCR Plates and Tubes is expected to further propel market growth in the coming years.
However, the PCR Tubes and PCR Plates market faces certain challenges that may impede its growth. These challenges include the high cost of PCR-based testing, the lack of skilled professionals, and regulatory concerns related to the use of PCR-based diagnostics. The shortage of funding for research activities and the high competition in the market also pose significant challenges to market players.
The main findings of the report suggest that the global PCR Tubes and PCR Plates market is expected to grow at a CAGR of over 3.10% during the forecast period. Asia-Pacific is expected to witness the highest growth due to the increasing investments in research activities, the rising prevalence of infectious diseases, and growing awareness about personalized medicine.
To capitalize on the opportunities in the PCR Tubes and PCR Plates market, market players should focus on developing innovative and cost-effective products, partnerships, mergers, and acquisitions, and expanding their geographical reach. Market players should also consider investing in R&D activities to develop new and advanced PCR technologies to cater to the growing demand in the market.
This report covers impact on COVID-19 and Russia-Ukraine wars in detail.
Purchase This Report: https://www.reportprime.com/checkout?id=11377&price=3590
Market Segmentation (by Application):
Biomedicine
Genetic
Others
Information is sourced from www.reportprime.com
| chiragreportprime2 |
1,640,399 | Eu Capacito: cursos gratuitos de IA, IOT, Azure e muito mais | A “Eu Capacito” se orgulha de ter impactado positivamente a vida de mais de 1,5 milhão de usuários... | 0 | 2023-10-20T23:52:35 | https://guiadeti.com.br/eu-capacito-cursos-gratuitos-ti/ | cursogratuito, bigdata, blockchain, cursosgratuitos | ---
title: Eu Capacito: cursos gratuitos de IA, IOT, Azure e muito mais
published: true
date: 2023-10-19 19:13:21 UTC
tags: CursoGratuito,bigdata,blockchain,cursosgratuitos
canonical_url: https://guiadeti.com.br/eu-capacito-cursos-gratuitos-ti/
---
“Eu Capacito” prides itself on having positively impacted the lives of more than 1.5 million users through its free courses.
This renowned platform has established itself as an invaluable resource for professionals and enthusiasts seeking to sharpen their skills and knowledge in a world increasingly dominated by technology.
Courses such as AI, IoT, Azure, Big Data & Analytics, [Python](https://guiadeti.com.br/curso-de-algoritmos/ "Curso de Algoritmos: Aprenda a Base da Ciência da Computação"), Cybersecurity, Blockchain, and [Java](https://guiadeti.com.br/curso-de-algoritmos/ "Curso de Algoritmos: Aprenda a Base da Ciência da Computação") Fundamentals attest to the quality and diversity of the learning on offer.
At the intersection of innovation and education, “Eu Capacito” serves as a catalyst for employability and entrepreneurship, providing not only technology education but also the business skills crucial to successfully navigating today's corporate landscape.
In a world where technology evolves by leaps and bounds, “Eu Capacito” is committed to ensuring that no one is left behind, offering the opportunity to upskill and thus stay relevant in a competitive job market.
## Eu Capacito Courses
The range of courses offered is broad and well curated, with popular topics such as Big Data & Analytics, Python, Cybersecurity, Blockchain, and Java Fundamentals standing out among the favorites. Each course is a deep dive into practical and theoretical knowledge, designed to equip students with tangible skills that apply in the real world.

_The Eu Capacito platform_
### Professional Training and Entrepreneurship
“Eu Capacito” is not limited to technology education. The platform is a holistic resource that also offers learning in business and other skills crucial for aspiring corporate professionals and entrepreneurs.
It is a bridge between current knowledge and the skills needed to thrive in a dynamic, competitive work environment.
### Don't Get Left Behind
In an era where technology permeates every aspect of our lives, staying up to date and equipped with relevant skills is not just an advantage but a necessity. “Eu Capacito” offers the perfect opportunity for you to upskill and ensure that your relevance in the job market not only persists but flourishes. Ride the wave of technology with confidence and competence, preparing for the future today.
### Courses Offered
- A Quarta revolução industrial e o empreendedorismo
- Adicione lógica aos seus aplicativos com C#
- Análise de dados no Power BI
- Aplicativos Conectados
- Aprenda a desenvolver aplicações nativas em nuvem
- Aprenda sobre DevOps em Oracle Cloud
- Armazene dados no Azure
- Azure para o Engenheiro de Dados
- Big Data & Analytics
- Blockchain – Primeiros passos
- Blockchain Advanced
- Business Administration Specialist
- Business Intelligence (BI)
- Business intelligence
- CRM para o Salesforce Classic
- Ciência de Dados – Primeiros passos
- Cloud Fundamentals, Administration and Solution Architect
- Cloud Onboard Online
- Comece com a análise de dados da Microsoft
- Como Desenvolver a Colaboração entre a Equipe
- Como falar em público
- Como superar os desafios e se reinventar em tempos difíceis
- Computação em Nuvem – Primeiros passos
- Conceitos básicos da rede de computadores
- Conceitos básicos da segurança de rede
- Conceitos de nuvem – Princípios da computação em nuvem
- Conecte seus serviços
- Conecte-se!
- Conheça a plataforma de Marketing Digital da Oracle!
- Conheça o ERP em nuvem Oracle!
- Construa a sua carreira como administrador Salesforce
- Construa a sua carreira como desenvolvedor Salesforce
- Construa a sua carreira de Marketing com Salesforce
- Crie aplicativos sem servidor
- Crie confiança com a autopromoção
- Crie modelos preditivos sem código com o Azure Machine Learning
- Crie soluções de IA com o Azure Machine Learning
- Crie um site simples usando HTML, CSS e JavaScript
- Customer Experience Management
- Cybersecurity
- Dê seus primeiros passos com o C#
- Dê seus primeiros passos com o Python
- Desenvolva aplicaçoes em Oracle Cloud – Oracle Cloud Developer
- [Design Gráfico](https://guiadeti.com.br/curso-de-adobe-illustrator/ "Curso de Adobe Illustrator: Aprenda Design Gráfico Profissional")
- Design Thinking
- Design Thinking – Primeiros passos
- DevOps & Agile Culture
- Diversidade, Inclusão e Pertencimento para Líderes e Gerentes
- Domine as Competências Pessoais mais requisitadas no Mercado de Trabalho
- Empreendedor aumente sua produtividade no trabalho
- Empreendedorismo com foco em dispositivos móveis
- Empreendedorismo criativo
- Empreendedorismo e a segmentação em campanhas
- Empreendedorismo na rede de display
- Empreendedorismo no seu negócio online
- Empreendedorismo para cuidar do dinheiro
- Empreendendo na Transformação Digital
- Empreendendo para ser encontrado por clientes
- Empreender e entender o comportamento web
- Empreender seu negócio em outros países
- Entenda o mundo dos dados em constante evolução
- Entenda os conceitos básicos da codificação
- Entenda os conceitos básicos do aprendizado de máquina
- Equilíbrio emocional na Era da Incerteza
- Estratégias de Arquitetura de Big Data
- Explore a IA conversacional
- Explore a pesquisa visual computacional no Microsoft Azure
- Explore o processamento de idioma natural
- Fundamentos da segurança cibernética
- Fundamentos do marketing digital
- Gere conteúdo para promover sua empresa
- Gerenciamento de identidade e acesso no Azure Active Directory
- Gerencie operações de segurança no Azure
- Gerencie recursos no Azure
- Gestão Financeira de Empresas
- Gestão de Infraestrutura de TI
- Habilidades Profissionais
- Implante um site com as máquinas virtuais do Azure
- Implante um site no Azure com o Serviço de Aplicativo do Azure
- Implemente a segurança de rede no Azure
- Implemente a segurança do gerenciamento de recursos no Azure
- Implemente segurança do host de máquina virtual no Azure
- Inteligência Artificial (IA) – Primeiros passos
- Inteligência Artificial aplicada a comunicação
- Inteligência artificial e Computacional
- Internet das Coisas (IoT)
- Internet das Coisas – Introdução
- Introdução à Cibersegurança
- Introdução à Inteligência Artificial no Azure
- Introdução à comunicação corporativa
- Java Fundamentos
- Lógica de Programação
- Leadership Communication
- Liderança e Gestão Empresarial
- Lightning Experience Reports & Dashboards Specialist
- Linux Fundamentos
- Marketing em Plataformas de Social Media
- Melhore a segurança da sua empresa on-line
- Melhores práticas para trabalhar em casa
- Mobile Marketing
- Modele dados no Power BI
- Negócios – Empreendedorismo
- Networking eficiente
- O que é Python?
- OCI Explorer – Conheça o Oracle Cloud Infrastructure, a nuvem Oracle
- Orientação financeira e empreendedorismo
- Planejamento de desenvolvimento de carreira
- Plataforma de E-commerce Oracle para Empreendedores
- Preparando-se para o seu primeiro emprego
- Prepare dados para análise
- Princípios de Inteligência Artificial e seus impactos na comunicação e em nossas vidas
- Processamento de dados de grande escala com o Azure Data Lake Storage Gen2
- Programação orientada a objeto em Python
- Proteja os aplicativos de nuvem no Azure
- Proteja seus dados em nuvem
- Python
- Salesforce Fundamentals
- Security Specialist
- Segurança Digital – Cybersecurity Essentials
- Segurança cibernética – Primeiros passos
- Seja um administrador do Azure
- Soluções Tecnológicas Emergentes
- Torne-se um Especialista em Atendimento ao Cliente
- Torne-se um Gestor de Projetos
- Torne-se um Profissional de Vendas
- Trabalhe com os dados NoSQL no Azure Cosmos DB
- Trabalhe com os dados relacionais no Azure
- Trabalho Remoto: Colaboração, foco e produtividade
- Transmita suas ideias com textos e imagens
- UX – Experiência do Usuário
- User Experience
- Venha programar em Java
- Visualize dados no Power BI
- iOS nativo
## Eu Capacito
“Eu Capacito” is more than a learning platform; it is a movement aimed at empowering individuals, equipping them with the essential skills to thrive in the modern professional landscape. Since its inception, it has been dedicated to offering free, high-quality courses, helping people from all walks of life become not only employable but also innovators in their respective fields.
### Innovative and Diverse Courses
With a repertoire of courses that includes, but is not limited to, Big Data & Analytics, Python, Cybersecurity, and Blockchain, “Eu Capacito” stands out for its commitment to educational excellence. Each course is meticulously curated to ensure it meets the emerging demands of the job market, preparing students to face and overcome contemporary challenges.
### A Path to Entrepreneurship
Beyond technical skills, “Eu Capacito” understands the importance of entrepreneurship in today's world. The platform has become a vital resource for aspiring entrepreneurs, offering the insights, strategies, and knowledge needed to turn innovative ideas into sustainable, thriving businesses.
### Accessibility and Inclusion
What makes “Eu Capacito” exceptional is its accessibility. Breaking down geographic, financial, and social barriers, the platform ensures that quality education is a right, not a privilege. Everyone, regardless of location or background, has the opportunity to access a world of knowledge and opportunities.
### Preparing for the Future
“Eu Capacito” prepares individuals not only for the present but also for a future shaped by technological innovation. Each course is a step toward a future where technology and human skills coexist and complement each other, ensuring that professionals stay one step ahead of emerging trends.
<iframe title="Eu Capacito: quem somos" width="1170" height="658" src="https://www.youtube.com/embed/6VP2il_PSXQ?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
## Sign up for ‘Eu Capacito’ today and transform your future with free, quality education!
[Enrollment in Eu Capacito's courses](https://www.eucapacito.com.br/cursos) is done on the Eu Capacito platform's website.
## Share the power of learning! Spread the word about ‘Eu Capacito’ and transform lives!
Did you enjoy this content about the free courses? Then share it with all your friends and on your social networks!
The post [Eu Capacito: cursos gratuitos de IA, IOT, Azure e muito mais](https://guiadeti.com.br/eu-capacito-cursos-gratuitos-ti/) appeared first on [Guia de TI](https://guiadeti.com.br). | guiadeti |
1,640,404 | The Magic of Event Listeners | As I delve further into the tech world, I've come to realize the vast array of functionalities that... | 0 | 2023-10-20T02:45:44 | https://dev.to/devincb93/the-magic-of-event-listeners-52lf | webdev, javascript, beginners, programming | As I delve further into the tech world, I've come to realize the vast array of functionalities that code can provide. Among these, event listeners stand out as a crucial tool that can be leveraged to enhance user interaction and engagement. In this blog post, we'll delve into the world of event listeners with a focus on the following key types:
**1. Mouseover:** The 'mouseover' event listener allows developers to create subtle interactive effects when a user hovers their cursor over an element. This is particularly useful for enhancing user experience on websites and applications, making them more intuitive and engaging.
**2. Mouseout:** Complementing the 'mouseover' event is 'mouseout,' which triggers actions when a user moves their cursor away from an element. This event can be employed to provide feedback or to restore an element to its initial state, contributing to a polished user interface.
**3. Click:** The 'click' event listener is a fundamental component of user interface design. It enables developers to respond to user input, creating interactive elements that respond with precision when users click on buttons, links, or other interactive components.
**4. Submit:** When it comes to forms and data submission, the 'submit' event listener is indispensable. It facilitates the handling of form submissions, ensuring that data is processed accurately and efficiently, which is crucial for businesses collecting user information or feedback.
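All four of these events follow the same register-a-callback, dispatch-on-event pattern that `element.addEventListener('click', handler)` exposes in the browser. Here is a minimal, language-agnostic sketch of that pattern (written in Python purely for illustration; the class and method names are hypothetical stand-ins for the DOM API):

```python
# A toy stand-in for the DOM's EventTarget: callbacks are registered per
# event name and invoked whenever that event is dispatched.
class EventTarget:
    def __init__(self):
        self._listeners = {}

    def add_event_listener(self, event, callback):
        self._listeners.setdefault(event, []).append(callback)

    def dispatch_event(self, event):
        for callback in self._listeners.get(event, []):
            callback(event)

button = EventTarget()
button.add_event_listener("click", lambda event: print(f"{event} handled"))
button.dispatch_event("click")      # prints: click handled
button.dispatch_event("mouseover")  # no listener registered, so nothing happens
```

In the browser, `addEventListener('mouseover', …)`, `('mouseout', …)`, `('click', …)`, and `('submit', …)` work the same way: you hand the element a callback, and the runtime invokes it when the event fires.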
Event listeners play a critical role in shaping user experiences and adding functionality to web applications. By mastering these tools, developers can create polished, user-friendly interfaces that meet the needs of businesses and their customers. Stay tuned as we explore further aspects of the coding world, uncovering valuable tools and techniques that can drive success in the competitive tech landscape. | devincb93 |
1,640,518 | Understanding the Art of Defense: Social Engineering Attack Detection and Defense | Technological advancements and increased connectivity have made our lives more convenient, cyber... | 0 | 2023-10-20T05:45:50 | https://dev.to/indrajithbandara/understanding-the-art-of-defense-social-engineering-attack-detection-and-defense-4lji | cybersecurity | While technological advancements and increased connectivity have made our lives more convenient, cyber threats have also evolved, becoming more sophisticated and deceptive. Social engineering attacks, in particular, have become a substantial menace to cybersecurity. These attacks prey on human vulnerabilities rather than technical weaknesses, making them a challenging adversary. Let's unveil some of the various techniques attackers employ to deceive and manipulate individuals, and fill our toolkit with a robust set of defense strategies to recognize and thwart these bad actors.
### The Art of Deception: Unveiling Social Engineering Techniques
**Phishing: The Hook That Casts a Wide Net**

Phishing is one of the most prevalent social engineering techniques. Attackers disguise themselves as trustworthy entities, such as banks or familiar brands, and send emails or messages containing malicious links or attachments. These deceptions aim to extract sensitive information or deliver malware. To protect yourself:

- Always be skeptical of unsolicited requests for personal information.
- Verify the sender's legitimacy through official channels, not just the contact details provided in the message.
- Hover over links to reveal the actual URL before clicking on them.
**Pretexting: Crafting a Convincing Backstory**

Pretexting is a manipulative tactic where attackers create elaborate backstories to gain trust and access to sensitive information. They may pose as colleagues, government officials, or service providers. To stay safe:

- Always verify the identity of anyone requesting confidential data.
- Cross-check the information they provide with official records.
- Follow a strict "need-to-know" policy, disclosing only what is essential.
**Baiting: Temptation Lures You In**

Baiting attacks lure victims into compromising situations by offering something appealing, such as free software, movies, or music downloads. These temptations conceal malware or spyware, ready to infiltrate your system. Protect yourself by:

- Exercising caution when downloading files or software from unverified sources.
- Using reputable sources for your downloads.
- Keeping your devices updated with the latest security patches.
- Using tools like VirusTotal to investigate URLs or files.
### Building Resilience: Recognizing and Defending Against Social Engineering Attacks

**Skepticism as a Shield**

Skepticism is your first line of defense. Always question the legitimacy of unsolicited communications. If something seems too good to be true or raises even the slightest doubt, take a step back and investigate further.

**Identity Verification**

Verifying the identity of the person or entity making a request is crucial. Utilize official contact details, double-check the information they provide, and don't hesitate to confirm their identity through separate channels if needed.

**Ongoing Security Awareness Training**

Stay informed and vigilant through ongoing security awareness training. Cybersecurity is an ever-evolving field, and keeping up with the latest threats and defense strategies is essential to staying safe.

**Empowerment through Knowledge**

The cornerstone of a resilient cybersecurity strategy is empowering individuals with the knowledge to recognize and defend against social engineering attacks. By being vigilant, verifying identities, and staying informed, we can collectively fortify our defenses and outsmart the cunning tactics employed by attackers.
Social engineering attacks are an ever-persistent threat that can target anyone, from individuals to organizations. By understanding the tactics attackers use and adopting a proactive defense strategy, we can navigate the digital landscape with greater confidence. Remember, skepticism is your ally, identity verification is your safeguard, and knowledge is your armor against attacks. Stay informed, stay vigilant, and stay safe in the digital world. | indrajithbandara |
1,640,524 | "Expert Estimating Services in Canada and the USA: Unlocking Your Project's Potential" | In the dynamic landscape of construction and project management, accurate estimates are the... | 0 | 2023-10-20T06:02:44 | https://dev.to/estimate/expert-estimating-services-in-canada-and-the-usa-unlocking-your-projects-potential-5b81 | <p>In the dynamic landscape of construction and project management, accurate estimates are the foundation for successful outcomes. In both Canada and the USA, where the construction industry is booming, having the right estimating services in place can make all the difference. This article delves into the world of <a title="opening estimating services" href="https://veracityestimating.com/opening-estimating-services/" target="_blank">opening estimating services</a>, exploring how they can benefit your projects, and providing insight into what to look for when choosing the right estimating service provider.</p>
<p><strong>1. The Role of Estimating Services</strong></p>
<p>Estimating services in Canada and the USA play a pivotal role in project planning and execution. They provide an essential framework for making informed decisions, ensuring that your project stays within budget, and is completed on time. Whether you are a seasoned contractor or a novice project manager, having expert estimating services at your disposal is like having a trusted compass to guide you through the intricacies of your project.</p>
<p><strong>2. Precision at its Core</strong></p>
<p>The hallmark of a reliable estimating service is precision. These services are equipped with the latest tools and technologies to offer accurate cost assessments, ensuring that you don't end up with unexpected budget overruns. This level of precision can save you time, money, and a lot of headaches down the line.</p>
<p><strong>3. Tailored Solutions for Unique Projects</strong></p>
<p>One size doesn't fit all in the construction and project management world. The best estimating services in Canada and the USA understand this and provide customized solutions that cater to the unique demands of each project. This adaptability ensures that your estimates are as accurate as possible and that your project has the best chance of success.</p>
<p><strong>4. Speed and Efficiency</strong></p>
<p>Time is money, especially in the fast-paced construction industry. Top-notch estimating services are not only accurate but also efficient. They can swiftly generate estimates, helping you make crucial decisions without unnecessary delays.</p>
<p><strong>5. Cost Savings and Risk Management</strong></p>
<p>By partnering with the right estimating service, you can proactively identify potential cost-saving opportunities and manage risks effectively. This strategic approach can make a significant difference in the overall cost of your project.</p>
<p><strong>6. Choosing the Right Estimating Service Provider</strong></p>
<p>Selecting the ideal estimating service provider is as crucial as the estimates themselves. When looking for estimating services in Canada and the USA, consider factors like experience, technology, reputation, and customer reviews. It's also vital to ensure they have a team of seasoned professionals who understand the intricacies of the construction industry in your region.</p>
<p><strong>Conclusion</strong></p>
<p>Estimating services in Canada and the USA are not just about crunching numbers; they are the guiding stars that can lead your project to success. By embracing precision, customization, efficiency, and cost-effective strategies, these services can empower you to achieve your project goals with confidence. When it comes to choosing an estimating service provider, remember that the right partner can make all the difference in your project's journey. So, harness the power of estimating services and set your projects on a path to success in the bustling construction industry of Canada and the USA.</p> | estimate | |
1,640,697 | Benefits of Enrolling in a Local IELTS Coaching Center | Enrolling in a coaching center when preparing for your IELTS is a great way to improve your skills.... | 0 | 2023-10-20T09:04:11 | https://dev.to/sgold7593/benefits-of-enrolling-in-a-local-ielts-coaching-center-9e2 | <p>Enrolling in a coaching center when preparing for your IELTS is a great way to improve your skills. After all, you need to prove that you have exceptional mastery over reading, writing, speaking, and listening. Now, you can choose online live tutoring sessions. But the better option is certainly a local IELTS coaching center.</p>
<p>If you wonder, “Where should I get IELTS coaching near me?” consider <strong><a href="https://abroadvice.com/"><u>AbroAdvice.com</u></a></strong> as an option. This service offers online and offline classes. There are several other websites that offer only online coaching and many offline tutoring centers. Usually, students opt for online sessions because of their convenience. However, local coaching classes have lots of benefits to offer.</p>
<p><strong>Immediate assistance with doubts</strong></p>
<p>The best part of local offline coaching centers is that you don't have to wait to get your doubts addressed. If you don't understand something, you can immediately ask the tutor to pause and explain the topic again. However, such instantaneous resolution isn’t possible for lectures that are usually pre-recorded. When in doubt, you have to wait until the experts are available to answer your queries. You can't get instant assistance with your doubts here.</p>
<p><strong>Healthy competition with peers</strong></p>
<p>If you opt for online classes, you’ll miss out on learning together with a group of peers. Offline classes always get an additional point in their favor because of this. You can get to know others and receive encouragement to improve your skills. A healthy competition can motivate you to do better. Besides, you can learn from others' mistakes as well. So, the next time you search for "IELTS classes near me,” <strong><a href="https://abroadvice.com/ielts"><u>check here</u></a></strong> to enquire about offline classes.</p>
<p><strong>Personalized classes with experts</strong></p>
<p>It’s impossible to get personalized classes online because most are pre-recorded sessions. But in the case of offline classes, the tutor is well-versed in your strengths and weaknesses. So they can personalize the sessions to cater to your needs. Their guidance will allow you to figure out where you're lacking so you can focus more on those areas.</p>
| sgold7593 |
1,640,862 | Managing Your Money Abroad: Automating Expense Tracking With Receipt Recognition | Just a few months ago I went off on a new journey in search of new experiences in Portugal! However,... | 0 | 2023-10-20T10:44:41 | https://dev.to/kwan/managing-your-money-abroad-automating-expense-tracking-with-receipt-recognition-1meg | ai, ocr, javascript | _**Just a few months ago I went off on a new journey in search of new experiences in Portugal! However, during the initial months of my stay, I encountered unexpected challenges, such as the lack of financial control…**_
Like many others who move to a new country, I found myself caught up in a whirlwind of activities and emotions. From finding accommodation to dealing with bureaucratic processes and adjusting to a new culture, my attention was divided between various tasks, so much so that I neglected the importance of maintaining proper organization and efficient expense tracking.
Without accurate tracking of my expenses, it was challenging to understand where my money was going, which led me to make poor financial decisions. Fortunately, that experience gave me an idea: a simple program using AI-based image recognition to track supermarket expenses, giving people moving here a clear and organized view of their spending.
In this article, I will share with you the steps to implement this solution. You will discover how to use image recognition to read supermarket receipts and potentially integrate this data into a finance management program that can bring significant benefits to tracking your personal finances in Portugal.
---
## Step-by-Step Guide: Automating Expense Tracking with Receipt Recognition
### 1. Extracting Text from Receipts Using OCR
To begin with, we implement Optical Character Recognition (OCR) techniques to extract text from supermarket receipts. This is possible thanks to the powerful pytesseract library. By using OpenAI’s ChatGPT, an advanced language model, we have crafted the code below to execute this task:

### 2. Processing Extracted Text and Extracting Products
Once the text is extracted, it goes through further processing to extract relevant information, such as the purchased products. This can be done with various methods, including regular expressions (Regex) or other text processing techniques. In this example, we output the extracted products as a list of strings:

### 3. Converting Products to JSON Format
To facilitate data integration with potential finance management programs, the extracted products are converted into JSON format. JSON (JavaScript Object Notation) provides a standardized way to structure and exchange data. Here is the code snippet for this:

The main function then ties together each of the methods implemented above:

## Automating Expense Tracking with Receipt Recognition: Final Thoughts
Controlling your finances when moving to Portugal is crucial for a smooth transition and a solid financial foundation. The code provided in this article offers an initial solution using image recognition to read supermarket receipts and extract product data. This data can easily be integrated into various finance management programs, giving individuals and businesses tools to track expenses, make informed financial decisions, and simplify accounting tasks.
It’s important to note that the solution presented here is not the final product, but a starting point for further development. As someone who has experienced the challenges of managing finances during a relocation, I understand the importance of flexibility and customization. In addition, as an advocate of open-source code, I believe in the power of collaboration and the freedom to modify and improve upon existing solutions. By utilizing the code snippets provided and tailoring them to your specific needs, you can create a personalized finance management system that truly fits your requirements and maybe help others colleagues.
Looking ahead, there are countless possibilities for integrating this code into a diverse array of projects: a personal budgeting application that empowers you to take charge of your finances, or an AI-powered financial assistant that guides you towards financial success.
Article written by Joubert Filho and originally published at https://kwan.com/blog/managing-your-money-abroad-automating-expense-tracking-with-receipt-recognition on August 23, 2023. | kwan |
1,641,006 | How to get the hex code of a key from my keyboard ? 👓 | If you are studying your keyboard's behavior and want to know the hexadecimal value of the key event... | 0 | 2023-10-20T13:49:02 | https://dev.to/stacy-roll/how-to-get-the-code-of-a-letter-or-an-event-from-my-keyboard-in-hex-6o2 | rust, tutorial, keyboar, learning |
If you are studying your keyboard's behavior and want to know the hexadecimal bytes a key emits when pressed, you can run this Rust code to obtain the exact values. If you don't have Rust installed, **you can find the mapped values in** the [k_board](https://docs.rs/k_board/1.1.2/src/k_board/lib.rs.html#61-258) library.
If you have rust installed, run the following command:
`cargo new keys && cd keys && cargo add k_board`
copy the code into the _main.rs_ file, then
`cargo run`
```rust
use std::io::{Read, Write};

fn main() -> std::io::Result<()> {
    loop {
        let _ = get_key();
    }
}

pub fn get_key() -> std::io::Result<()> {
    // Switch the terminal into raw mode so key presses arrive byte-by-byte.
    let termios_environment: k_board::termios = k_board::setup_raw_mode().unwrap();
    std::io::stdout().flush().unwrap();
    // A single key press can produce up to 3 bytes (e.g. escape sequences for arrows).
    let mut buffer: [u8; 3] = [0; 3];
    std::io::stdin().read(&mut buffer).unwrap();
    if buffer[0] != 0x00 {
        println!(
            "[0x{:x?}, 0x{:x?}, 0x{:x?}]",
            buffer[0], buffer[1], buffer[2]
        );
    }
    std::io::stdout().flush().unwrap();
    // Restore the terminal to its original (cooked) mode.
    k_board::restore_termios(&termios_environment).unwrap();
    Ok(())
}
```
## Preview
 | stacy-roll |
1,641,190 | Power Automate - Environment Variables | If you want to do ALM (application Life-cycle Management), and you should, then you need environment... | 21,919 | 2023-11-06T07:40:36 | https://dev.to/wyattdave/power-automate-environment-variables-a5d | powerplatform, powerautomate, lowcode, rpa | If you want to do ALM (application Life-cycle Management), and you should, then you need environment variables.
ALM has 2 main benefits
1. Separation of duty
2. Protection of prod
Separation of duty means the person who created it can't deploy it. Why is that important? Well, we need checks and balances: apps and automations can have a lot of power, power that can be misused. Just imagine if the flow had access to sensitive data, and the developer used that access to distribute the data. Often this is a legal requirement, as it ensures critical company data that may impact share price cannot be tampered with (e.g. SOX in the US).
Protection of prod is kind of obvious, but let me expand anyway. By stopping edits or deployments straight to prod we stop many unexpected system issues. Test is a gatekeeper to check that the new deployment works as expected. We would all want crashes in test rather than on our critical prod automation's/apps.
---
Sorry, I digressed a little there; so, environment variables. Well, if we are deploying from dev to test to prod, then we need a quick and easy way to flip between those environments.
An obvious example would be SharePoint, lets say you have a flow triggered by a file upload to a library, extracts data and saves to a list. We need a way to trigger, dev, test and prod, but not at the same time. We also need to make sure the extracted test data does not go into the prod list.
The key info we need to understand are the right strategy and right environment type.
## Strategy
There are a couple of approaches to your strategy, multiple versions/environments or sub environments.
The best example for this is our old friend SharePoint. Do you go with:
Multiple sites, all duplicates with the same lists, libraries settings.
or
One site, multiple lists / libraries
This also works for other connections like Outlook (different Shared mailbox or same but different folders).
My preference is different versions/environments, as that is the most accurate and allows testing of updates to the version/environment settings.
## Types
There are 5 types of environment variables and using the right one is key
- Data source (SharePoint/Dataverse/SAP only)
- Text
- Decimal Number
- Yes/No
- JSON
- Secret

**Data source**
This is great for SharePoint sites and lists: it uses the SharePoint API to list all your available sites with a display name. You can then use the site environment variable for the list variable, again showing the available lists.
There are a couple of problems with this though. First, the API isn't 100% perfect, so often sites and lists/libraries don't show up to be selected. The next is its limitations.

First, it's only good for the 3 connections above (I would love it for MS Forms, Planner and more). But the big one is that it doesn't cover everything like you would think. Take SharePoint: if you use it to select an Excel action from a SharePoint list you get an error. You have to input the library id manually, and the same goes for the Word connector.
For me the whole point of LowCode is ease of use; the data source environment variable is exactly what it should be, but its implementation is half baked.
**Text**
This one is obvious: it holds a string. As mentioned above, this is great for library/drive ids, and a must when you use any of the HTTP actions. It can also be used for dates/times, but it should never be used for confidential data like passwords or secrets, as it is stored in plain text in the environment variable values Dataverse table.
**Decimal Number**
Another obvious one, instead of going with an integer they use a float. Not sure how many decimal places, but never had an issue (also works fine with integers - whole numbers).
**Secret**
This one is a great idea but it's just out of preview, so it's still not fully featured. As I said, string variables are not right for passwords, so where do you put them? In a secret. The term 'secret' is used for Azure SPNs etc., and this is what it was originally designed for (though I don't see why you can't use it with any password). A simple approach would just be a masked string variable, but that is not how this works, and for good reason. Instead it links to a key vault (currently only Azure Key Vault). So what is the main benefit of this approach? Well, it's rotation. Passwords should be rotated, and key vaults do this automatically. This means you don't have to manually update them and then go into the Power Platform environment and update them there. Another benefit is security, as the secret is now stored in an industry-leading location rather than a Dataverse table (not to say that's not secure).

You need all the Key Vault information, so you will need support from your Azure team (if it's a different team).
A good blog to read about secret variables can be found here:
https://blog.yannickreekmans.be/environment-variable-secrets-azure-key-vault-improvement/
**Yes/No**
Another obvious one: this is a simple boolean (true/false), also known as a flag. If you select Yes it returns true; No returns false. Great for conditions (especially if() expressions, as it can be used for the first input).
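For example, with a (hypothetical) environment variable named wd_isProd, the flag can be dropped straight into the first input of an if() expression:
```
if(parameters('isProd (wd_isProd)'), 'Prod Site', 'Test Site')
```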

**JSON**
This one is my favourite as it's incredibly powerful but rarely used. A JSON variable allows you to store a full body as an environment variable. Why is this cool? 3 reasons.
1- Multi variable. I see lots of solutions with multiple environment variables, and these are a pain to keep up to date. If they are all linked (so generally get updated at the same time), then they should all be replaced with one JSON variable.

```
{
"text": "hello world",
"number": 1,
"boolean": true
}
```

It takes a little more effort in the flow, as the quick select can't be used (instead you use an expression with the sub-variable name):
```
parameters('testObject (wd_testObject)')['text']
```
2- Arrays. Yep, you can store whole arrays in there too. No more pulling in a SharePoint list for a config array; store it in a variable.
```
{
"array": [
{
"text": "hello",
"index": 1
},
{
"text": "world",
"index": 2
}
]
}
```

Using them is the same as the multi variable: just add the array to the loop and then use the items() expression.
```
parameters('testObject (wd_testObject)')['array']
```
```
items('Apply_to_each')?['text']
```
3- Test data. You can store the entire body returned from an action and reuse it for testing. By this I mean you can copy the return body from something like a SharePoint Get items action, paste it into the variable and then call it.
---
As you can see there are more than just text variables, and used correctly they can push your automations to be enterprise-ready.
| wyattdave |
1,641,274 | Pledge #hacktoberfest23 | Intro So this is my first post here, so I guess Hello World! or Hello Dev.to community! This year,... | 0 | 2023-10-20T18:55:38 | https://dev.to/gonmmarques/pledge-hacktoberfest23-41kb | hacktoberfest23 | **Intro**
So this is my first post here, so I guess Hello World! or Hello Dev.to community!
This year, I'm also taking part in the community effort to pledge to the event.
I'm dedicating myself to actively contribute to open-source projects throughout the month of October.
**Goals**
My goals for Hacktoberfest 2023 are twofold. Firstly, I aim to discover more open-source projects, even if I don't directly contribute to them.
Secondly, to get the 4 required PRs accepted, but more importantly to be among the first 50k contributors to have a PR accepted and so have a tree planted on my behalf.
**Pledge**
As such, I pledge to make contributions, to the best of my ability, to the open-source community.
1,641,672 | The Process of Building My (Former) Website | Out of boredom during the pandemic, I made my own website; from then until now I still... | 0 | 2023-10-21T05:36:36 | https://dev.to/torten/proseso-ng-paggawa-ng-aking-dating-website-3kb3 | webdev, svelte, javascript, programming | Out of boredom during the pandemic, I made my own website, and from then until now I still don't know what to put on it. Hopefully that changes in the next revision of my website.
## Technical Details
This website uses [TailwindCSS](https://tailwindcss.com/) for its design, and I used the [SvelteKit](https://kit.svelte.dev) framework to make my website-building process easier.
## Process
The first thing I focused on when building the website was its blog function, so I just followed this blog post by Josh Collinsworth called [Let's learn SvelteKit by building a static Markdown blog from scratch](https://joshcollinsworth.com/blog/build-static-sveltekit-markdown-blog).
And.... ta-dah?

Uh-huh. Due to circumstances unknown to me, this is how the articles turned out.

It sort of works: if you know the exact URL of an article, you can read it. There are just some things that, in my opinion, are not pleasing to my eyes.
After finishing the blog, I made the page for the projects I've built (which now needs updating).
And lastly, I made my homepage.
This project was easy for me to do; you just need the drive to learn and some hours to spare.
*The full code can be found in my [Github repository](https://github.com/tortendes/torten-xyz)*
1,641,688 | Top NFT Marketplace Business for Entrepreneurs in 2024 | The NFT marketplace is booming nowadays, and entrepreneurs have a golden opportunity to jump into... | 0 | 2023-10-21T06:12:44 | https://dev.to/aanaethan/top-nft-marketplace-business-for-entrepreneurs-in-2024-ao0 | nft, business, technology, blockchain | 
The NFT marketplace is booming nowadays, and entrepreneurs have a golden opportunity to jump into this exciting space. NFTs have transformed the way we think about digital assets and ownership. Let's dive into these exceptional opportunities for entrepreneurs.
**Top NFT Marketplaces for Business:**
**OpenSea:**
**[OpenSea NFT marketplace](https://www.kryptobees.com/opensea-clone-script)** is one of the popular and ever-evolving marketplaces in the NFT space. It has multiple advantages, features, and countless possibilities to make the NFT world innovative. Let’s get to know it!
**Diverse Asset Types:** OpenSea supports a wide range of NFT categories, providing users with diverse investment and collection opportunities.
**High Liquidity:** It's one of the largest and most liquid NFT marketplaces, making it easier for users to buy and sell NFTs.
**Interoperability:** OpenSea accommodates NFTs from multiple blockchains, enhancing accessibility and trade opportunities.
**Community Engagement:** Users can actively participate in discussions, collaborations, and creator followings.
**User-Friendly:** Its intuitive interface makes it accessible for newcomers to the NFT space.
**Solanart:**
**[Solanart NFT marketplace](https://www.kryptobees.com/solanart-clone-script)** operates on the Solana blockchain, known for its speed and cost-effectiveness. It is one of the most popular and high-revenue-generated platforms. So every entrepreneur should know their advantages.
**Solana Speed:** Solanart operates on the Solana blockchain, known for its fast and cost-effective transactions.
**Digital Art and Collectibles:** The platform offers a variety of NFTs, including digital art and collectibles.
**SOL Token Integration:** Utilizing the SOL token for transactions and participation within the ecosystem.
**Liquidity and Rewards:** Users can participate in liquidity pools and stake SOL to earn rewards, potentially generating income.
**NFT Farming:** Solanart offers NFT farming, allowing users to earn NFT rewards by staking tokens.
**Virtual Real Estate Marketplace:**
Virtual real estate marketplaces, such as those within metaverse platforms like Sandbox, have emerged as exciting spaces for entrepreneurs in the NFT world. But what exactly does this entail?
Virtual real estate refers to parcels of land or digital spaces within virtual worlds. Just as in the physical world, virtual land can vary in location, size, and desirability. These metaverse platforms enable users to buy, sell, and develop their virtual properties.
**Why Invest in a Virtual Real Estate Marketplace?**
**Expanding Metaverse Ecosystem:** As metaverse platforms continue to grow, so does the demand for virtual land. Entrepreneurs who create NFT marketplaces dedicated to virtual real estate can capitalize on this expanding market.
**Creativity and Collaboration:** Virtual real estate is a canvas for creators, architects, and builders. By enabling users to design and develop their virtual properties, you can foster a creative community.
**Ownership and Investment:** Just like physical real estate, virtual land can appreciate in value. Entrepreneurs can develop tools for assessing property value, transactions, and community interaction.
**Niche Focus:** Entrepreneurs can choose to specialize in a particular metaverse, catering to the unique preferences of their community.
**Conclusion**
NFT marketplaces are booming in the blockchain industry. They are not just innovative platforms; they are an opportunity to access digital assets worldwide in a simple way. This makes them a great business model for entrepreneurs entering the NFT world. With unique features, reduced entry barriers, and niche-specific offerings, these platforms are catering to a diverse array of creators and collectors, making NFTs more accessible and engaging. As they continue to evolve, NFT marketplaces are shaping the future of digital space, and we can only imagine what exciting opportunities lie ahead in this ever-evolving landscape.
1,641,753 | ChatGPT vs. Human Chat: Pros and Cons | AI makes many online functions faster: generating blog posts and other written material and even... | 0 | 2023-10-21T07:43:03 | https://dev.to/iamfranklin/chatgpt-vs-human-chat-pros-and-cons-55ni | AI makes many online functions faster: generating blog posts and other written material and even customer service responses on an online text chat. Let’s analyze the pros and cons of each option to find out which one is best for running your business.
<h2>ChatGPT Pros and Cons</h2>
Here are the pros and cons of utilizing ChatGPT for online chat interactions.
<h3>Pros</h3>
<ul>
<li>The AI software is available 24/7 when human representatives are not on shift.</li>
<li>The program is more scalable in that it can handle multiple customer chats at one time much more than a human representative can balance simultaneously.</li>
<li>Smaller businesses benefit from utilizing AI over human representatives as they start so that they can budget accordingly.</li>
<li>Responses to customers happen at instant speed, which is especially appreciated by customers who are disgruntled about an issue with their merchandise or how an order went.</li>
</ul>
<h3>Cons</h3>
<ul>
<li>ChatGPT does not have the emotional intelligence that a human representative would have. Therefore, the responses the AI program gives may come off as rough around the edges.</li>
<li>The AI software is knowledgeable in varied language patterns. However, it can get confused if the customer’s issue is too complex, which could frustrate customers further if ChatGPT gives them a response that is not correct to their situation.</li>
<li>There are issues with AI getting a hold of sensitive customer information such as payment card numbers, which can pose an issue down the line if there is a data breach from a hacker.</li>
</ul>
Speaking of AI, did you know that casinos also utilize AI for various applications? When you play <a href="https://casino.netbet.it/quick-games">quick games NetBet</a>, the AI system can figure out whether you played the casino games fairly or if you cheated somehow to get a payout. It doesn’t matter whether you are playing Casino, Spaceman, or Aviator, there is always AI monitoring your gaming style to ensure you are playing fairly.
<h2>Human Chat Pros and Cons</h2>
Here are the pros and cons of having human representatives in your business’s customer service chat rooms.
<h3>Pros</h3>
<ul>
<li>Heightened emotional intelligence to relate to the customer better than AI.</li>
<li>Human representatives have memory if they are helping the same customer as they did before. They can show empathy and appreciation as they work with this customer again unlike AI.</li>
<li>Human chat reps are better at handling clients’ private information with much less chance of data breaches than if AI was handling the conversation.</li>
<li>Easier and more efficient to build a customer service relationship with clients when human reps are on online chat.</li>
<li>They can give more accurate responses to complex customer service issues unlike AI can do.</li>
</ul>
<h3>Cons</h3>
<ul>
<li>Response times to clients will be slower than with ChatGPT because humans cannot respond instantly. However, representatives can improve response times depending on how fast they type.</li>
<li>Some are not available 24/7 much like AI customer chat programs are. Certain human chat reps may only work from morning until early evening without someone on for the graveyard shift unless the business operates a 24/7 human chat schedule.</li>
<li>It takes more money to hire and train human chat representatives whereas ChatGPT is already prepared to go in instantaneously to answer customer questions.</li>
</ul>
<h2>Finding the Right Balance of Emotional Intelligence When Using ChatGPT for Customer Service Responses</h2>
While ChatGPT can provide much faster response times to disgruntled customers on an online text chat than a human representative can, it does not have the same emotional intelligence humans do. So how do you continue to respond to customers quickly while still having a heart?
If you plan to use AI to respond to your business’s customer service chats, then you should have your representatives edit the ChatGPT responses just a tad so that there’s some human element to it. You don’t want your company to sound standoffish or misunderstanding about various customer service issues. Hence, following this tactic is your best bet for maintaining a strong connection with clients while continuing to generate faster responses.
<h2>So Which Is Better? ChatGPT or Human Chat?</h2>
ChatGPT is better at response time while human chat has more emotional intelligence incorporated. There is not one option that is better than the other. Hence, analyze the pros and cons to make your final decision on whether you will use ChatGPT or human chat for your business’s online chat rooms and go from there. | iamfranklin | |
1,641,766 | Latest Technology News in Pakistan | Latest Technology News in Pakistan Introduction In recent years, Pakistan has witnessed significant... | 0 | 2023-10-21T08:13:18 | https://dev.to/technewspakistan/latest-technology-news-in-pakistan-4722 | Latest Technology News in Pakistan
Introduction
In recent years, Pakistan has witnessed significant growth in the field of technology. The government of Pakistan has been actively investing in artificial intelligence research, leading to numerous advancements in the tech sector. This article will delve into the latest [technology news in Pakistan](https://www.informal.pk/technology), highlighting the country's commitment to innovation and development.
Pakistan's Government Investment in Artificial Intelligence Research
Pakistan's government has recognized the importance of artificial intelligence (AI) and its potential to revolutionize various industries. In a bid to stay at the forefront of technological advancements, the government has allocated funds towards AI research. This investment aims to benefit different sectors, such as healthcare, education, and agriculture.
Technological Advancements in Healthcare
One sector that has greatly benefited from the government's investment in AI research is healthcare. With the development of advanced machine learning algorithms, healthcare professionals in Pakistan can now utilize AI-powered diagnosis systems. These systems analyze medical data to provide accurate and timely diagnoses, improving patient outcomes and reducing the burden on healthcare facilities.
AI in Education
Artificial intelligence has also made its way into the Pakistani education system. Intelligent tutoring systems, powered by AI, are being used to enhance the learning experience for students. These systems personalize education by adapting to individual students' needs, providing tailored learning materials, and monitoring progress. This innovation has the potential to bridge the education gap and improve the overall quality of education in Pakistan.
Revolutionizing Agriculture with AI
Pakistan's agricultural sector is another area that has seen remarkable advancements through the integration of AI technology. AI-powered systems can monitor crop health, predict weather patterns, and optimize irrigation processes. This data-driven approach has the potential to increase crop yields, improve resource utilization, and ensure food security in the country.
Future Prospects of AI in Pakistan
As the government continues to invest in AI research, the future prospects for technology in Pakistan look promising. The integration of AI in industries such as finance, transportation, and manufacturing is expected to streamline processes, improve efficiency, and boost economic growth. This commitment to technological advancements positions Pakistan as a key player in the global [tech news Pakistan](https://www.informal.pk/technology) landscape.
Challenges and Opportunities
While the progress in AI research and development in Pakistan is commendable, there are still challenges to overcome. Building a skilled workforce capable of harnessing AI's potential is crucial. Initiatives involving collaborations between academia and industry can help address this challenge by creating specialized programs to train professionals in AI technology.
Additionally, ensuring the ethical use of AI remains a concern globally. Pakistan must establish regulations and guidelines to govern the application of AI to uphold privacy, security, and fairness. Striking a balance between innovation and responsible AI usage will pave the way for sustainable and inclusive development.
In conclusion, Pakistan's government's investment in artificial intelligence research is transforming the technological landscape of the country. With advancements in healthcare, education, agriculture, and other sectors, Pakistan is poised to become a hub of innovation and growth. As the nation continues to embrace AI, challenges will need to be addressed, and ethical considerations must shape its implementation. The future of technology in Pakistan is bright, and the impact of AI on various industries is set to redefine the nation's potential.
SEO meta-description:
Stay updated with the latest technology news in Pakistan as the government invests in artificial intelligence research. Discover how AI is revolutionizing healthcare, education, and agriculture.
Image credit: Pexels | technewspakistan | |
1,641,778 | What Is LangChain? Unlocking the Potential of LLMs | LangChain is an open-source framework crafted to ease the development of applications that leverage... | 0 | 2023-10-21T08:35:10 | https://medium.com/altern/what-is-langchain-unlocking-the-potential-of-llms-8ee8623888c6 | llm, langchain, ai, machinelearning |
[LangChain](https://langchain.com) is an open-source framework crafted to ease the development of applications that leverage LLMs. Its primary function is to provide a standardized interface for chains, offering extensive integrations with other tools, and facilitating end-to-end chains for typical applications. The utilization of LangChain extends across various applications similar to those of language models, including document analysis, summarization, chatbots, and code analysis.
Moreover, LangChain is engineered to be a bridge for software developers working within the realms of AI and machine learning, enabling the amalgamation of large language models with other external components to craft LLM-powered applications. This framework thus acts as a conduit allowing developers to work seamlessly with AI, specifically in developing applications powered by language models.
Delving deeper, LangChain's uniqueness lies in its modular framework which is compatible with Python and JavaScript. This modularity simplifies the development of applications powered by generative AI language models. Furthermore, LangChain is not merely confined to textual data; it extends its capabilities to be data-aware, meaning it can connect a language model to various data sources, making it a robust tool for developing context-aware applications.
Lastly, an intriguing aspect of LangChain is its ability to streamline interaction with various large language model providers like OpenAI, Cohere, Bloom, and Huggingface, among others. It further propels its utility by enabling the creation of Chains, which are logical links between one or more LLMs, thus providing a robust library for developers aiming to integrate multiple LLMs in their applications.
In the next posts, I will try to provide you with guides for using different LLMs with LangChain and Python.
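To make the Chain idea concrete without depending on any particular LangChain version, here is a library-free Python sketch of the pattern (the function names here are mine, not LangChain's API): each step's output feeds the next step's input.

```python
def make_prompt(topic: str) -> str:
    # Step 1: a prompt template fills a variable into fixed text.
    return f"Write one sentence about {topic}."

def fake_llm(prompt: str) -> str:
    # Step 2: a stand-in for a real LLM call (OpenAI, Cohere, Bloom, ...).
    return f"[model answer to: {prompt}]"

def chain(topic: str) -> str:
    # A chain is just composition over these steps: prompt -> model -> output.
    return fake_llm(make_prompt(topic))

print(chain("LangChain"))
```

A real chain swaps `fake_llm` for a call to an actual model provider; the composition idea stays the same.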
### Related Links
- [LangChain official website](https://langchain.com)
- [LangChain on Altern](https://altern.ai/product/langchain)
| dariubs |
1,641,953 | What is npm? | 🔍 What is npm? 📦 npm is an open-source repository of tools & libraries created by the... | 0 | 2023-10-21T13:10:50 | https://dev.to/omkarbhavare/npm-120c | webdev, javascript, npm, developer | 🔍 What is npm? 📦
npm is an open-source repository of tools & libraries created by developers. It serves as a central hub for the JavaScript community, offering a vast collection of reusable code packages to enhance and accelerate projects.
🤝 Benefits of npm
- Easy Package Management: npm simplifies the process of managing external libraries and tools, making it easy to add, update, and remove dependencies in your project.
- Code Reusability: Access to a vast repository of packages enables developers to reuse existing, well-tested code, saving time and effort in building common functionalities.
- Version Control: npm helps ensure version consistency by allowing developers to specify the exact versions or version ranges of dependencies, reducing compatibility issues.
- Community Collaboration: npm fosters community collaboration by allowing developers to share their code, contributing to a collaborative ecosystem of shared knowledge and resources.
🛠️ How NPM Works: A Step-by-Step Guide 🚀
1. Project Initialization: Developers kickstart a new project by running npm init in the project directory. This command generates a package.json file that acts as a manifest for the project, containing metadata and dependency information.
```
npm init
```
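Running this produces a package.json similar to the following (the field values here are illustrative defaults, not from a real project):
```
{
  "name": "my-project",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "license": "ISC"
}
```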
2. Developers specify these dependencies in the package.json file and install them using the npm install command.
```
npm install lodash
```
3. npm automatically creates a node_modules directory where it stores the installed packages. These dependencies are recorded in the package.json file under the dependencies section.
```
"dependencies": {
"lodash": "^4.17.21"
}
```
4. npm uses semantic versioning to define package versions. Developers can specify version ranges in the package.json file to allow for flexibility in updating packages; the caret in `^4.17.21`, for example, allows any compatible 4.x release at or above 4.17.21.
```
"dependencies": {
"lodash": "^4.17.21"
}
```
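As a rough illustration of what that caret range means, here is a simplified Python sketch of the rule (npm's real semver matcher also handles 0.x versions, pre-release tags, and other range operators, which this skips):

```python
def parse(version: str) -> tuple:
    # "4.17.21" -> (4, 17, 21)
    return tuple(int(part) for part in version.split("."))

def satisfies_caret(version: str, base: str) -> bool:
    # For a base of 1.0.0 or higher, ^base allows anything
    # at or above base but below the next major version.
    v, b = parse(version), parse(base)
    return b <= v < (b[0] + 1, 0, 0)

print(satisfies_caret("4.18.0", "4.17.21"))  # True  (compatible update)
print(satisfies_caret("5.0.0", "4.17.21"))   # False (breaking major bump)
```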
5. The package-lock.json file provides specific version information for each package that is currently being used in a project.
```
{
"name": "my-blog",
"version": "1.0.0",
"lockfileVersion": 2,
"dependencies": {
"lodash": {
"version": "4.17.21",
"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
"integrity": "sha512-...
"dev": true
}
}
}
```
Coming up next: 🔍
1. Semantic Versioning:
2. Dependency vs DevDependency:
3. Local & Global Package Installation:
Stay tuned for more insights into the world of JavaScript development! 🚀📦 | omkarbhavare |
1,642,120 | The Top Entrepreneurs of Marbella & The Costa Del Sol | Top Entrepreneurs Marbella has been created for local businesses who desire more from their... | 0 | 2023-10-21T17:06:00 | https://dev.to/thetopmarbella/the-top-entrepreneurs-of-marbella-the-costa-del-sol-5cgn | Top [Entrepreneurs](https://thetopmarbella.com/) Marbella has been created for local businesses who desire more from their advertising budget. | thetopmarbella | |
1,642,131 | Demystifying Android Architecture: A City Analogy | Introduction: Welcome to the bustling city of Android! Just like a city, Android has a... | 0 | 2023-11-08T19:51:01 | https://dev.to/olaoluwa99/demystifying-android-architecture-a-city-analogy-3l4f | ### Introduction:
Welcome to the bustling city of Android! Just like a city, Android has a complex architecture composed of various layers and components that work together to provide a seamless user experience. In this blog post, we'll use a city analogy to demystify Android's architecture, from its foundation to the user-facing applications.

1. **The Kernel:** *The City's Infrastructure*
At the heart of our Android city lies the Kernel. This is akin to the city's infrastructure, which includes roads, utilities, and the essential services that keep the city running. The Kernel manages core functions like memory, hardware interaction, and task scheduling, providing the foundation for the entire Android ecosystem.
2. **Hardware:** *Diverse Buildings*
Our Android city is home to a diverse range of buildings, each representing different Android devices. Just like buildings have unique features and purposes, Android devices come in various forms with distinct hardware capabilities.
3. **The HAL:** *The Highway System*
The Hardware Abstraction Layer (HAL) acts as the city's network of highways. These highways connect the various buildings (hardware components) to the city's core. The HAL standardizes communication between the Kernel and the hardware, ensuring that different devices can interact seamlessly with the Android OS.
4. **Operating System:** *City Administration*
The operating system, including Android OS, serves as the city's administration. It manages resources, enforces rules (permissions), and coordinates various activities within the city. It's responsible for maintaining order and ensuring the smooth operation of the entire ecosystem.
5. **The Framework:** *City Services*
Imagine Android's framework as the city's public services. It provides a wide array of libraries and APIs that app developers can use to build their applications. These services include user interface elements, databases, and more, making it easier for developers to create functional and appealing apps.
6. **Apps:** *The City's Residents*
In our Android city, apps are like the residents. They use the city's infrastructure, services, and resources to perform various tasks. Just as residents have different needs, apps serve a wide range of functions, from social networking to productivity and entertainment.
7. **App Store:** *The Marketplace*
App stores, such as Google Play, act as the marketplace in our city. Here, residents (users) can browse, discover, and download new services (apps) to meet their needs and enhance their experiences in the city.
8. **Security:** *Police and Safety*
Security measures in Android are akin to the city's police force and safety regulations. They protect the residents (user data and devices) from threats, enforce rules to maintain order, and ensure a secure environment within the city.
9. **Updates:** *City Improvements*
Periodic updates to the Android OS are like the city's improvements, such as road maintenance or infrastructure upgrades. These updates make the city run more efficiently, add new features, and enhance security, ensuring that the Android city stays up to date.
### Conclusion:
By using this city analogy, we hope to have demystified the intricate architecture of Android. Just like a well-organized city, Android's architecture is designed to provide a seamless and secure user experience. Understanding these layers and components can help you navigate the Android ecosystem more effectively and appreciate the complexity behind the devices we use every day. Whether you're an app developer or a user, this knowledge empowers you to make the most of the Android city you inhabit. | olaoluwa99 | |
1,642,247 | Designing for Inclusivity: Creating Accessible Web Experiences for All | In today's digital age, the internet has become an integral part of our lives, connecting people from... | 0 | 2023-10-24T09:00:00 | https://dev.to/sajeeb_me/designing-for-inclusivity-creating-accessible-web-experiences-for-all-4c0e | webdev, webaccessibility, inclusivedesign, digitalinclusion | In today's digital age, the internet has become an integral part of our lives, connecting people from all walks of life. As the digital landscape continues to expand, it's crucial to ensure that everyone can access and benefit from web content. This is where web accessibility comes into play. In this article, we will explore the importance of web accessibility and provide practical advice on making web applications inclusive for people with disabilities.
### The Importance of Web Accessibility
Web accessibility is the practice of ensuring that websites and web applications are usable by people with disabilities. This inclusivity is not only a matter of social responsibility but also a legal requirement in many countries. Here are some key reasons why web accessibility is vital:
**1. Legal Compliance:** Numerous countries have enacted laws that mandate web accessibility. The Americans with Disabilities Act (ADA) in the United States, for example, requires businesses and organizations to ensure that their digital platforms are accessible to all.
**2. Expanding User Base:** Making your website accessible means you can reach a broader audience. This includes people with disabilities, but also individuals using various devices and screen sizes.
**3. Enhanced User Experience:** A more accessible website is user-friendly for everyone. Improving the navigation, readability, and usability of your site benefits all users.
**4. Brand Reputation:** Demonstrating a commitment to inclusivity enhances your brand's reputation, showing that you care about the well-being of all your users.
### Practical Advice for Creating Accessible Web Experiences
**1. Use Semantic HTML:** Structure your web content using semantic HTML elements. This makes it easier for screen readers to interpret and present the content to users with visual impairments. Use heading tags (h1, h2, h3, etc.) for proper document structure and labels for form fields.
**2. Provide Alt Text for Images:** All images should include descriptive alternative text (alt text) to ensure that people who are visually impaired can understand the content and context of the images.
**3. Ensure Keyboard Accessibility:** Test your website's functionality using only a keyboard for navigation. All interactive elements, such as buttons and forms, should be accessible via keyboard input.
**4. Caption and Transcribe Multimedia:** Videos and audio content should have captions and transcripts. This benefits not only deaf and hard of hearing users but also those in quiet environments or non-native speakers.
**5. Consider Color and Contrast:** Avoid relying solely on color to convey information. Ensure that there is sufficient contrast between text and background colors to make content readable for people with low vision or color blindness.
**6. Test with Screen Readers:** Regularly test your website with screen reader software like JAWS, NVDA, or VoiceOver to identify and fix accessibility issues. Pay attention to how the screen reader reads your content and navigates your site.
**7. Follow Web Content Accessibility Guidelines (WCAG):** Familiarize yourself with the WCAG guidelines, which provide a detailed framework for web accessibility. Adhering to these standards can significantly improve your website's accessibility.
**8. Conduct User Testing:** Involve individuals with disabilities in the testing phase to gather valuable feedback. Their insights can uncover issues you might have missed.
**9. Prioritize Mobile Accessibility:** Ensure that your website is responsive and accessible on mobile devices. This is essential for users who rely on touch screens, voice commands, or assistive technologies on their mobile devices.
**10. Stay Informed and Evolve:** Web accessibility is an ongoing process. Stay up-to-date with the latest accessibility best practices and technologies. Continue to improve and refine your website's accessibility over time.
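The color-and-contrast advice in point 5 can be checked programmatically. Here is a small Python sketch of the WCAG 2.x contrast-ratio calculation (relative luminance with sRGB linearization; WCAG AA requires at least 4.5:1 for normal text):

```python
def relative_luminance(rgb):
    # rgb is a tuple of 0-255 ints; apply the WCAG sRGB linearization.
    def linearize(channel):
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    # Ratio of the lighter luminance to the darker one, each offset by 0.05.
    lighter, darker = sorted(
        (relative_luminance(color_a), relative_luminance(color_b)),
        reverse=True,
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background gives the maximum 21:1 ratio.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))
```

Running a check like this over your palette during design review catches low-contrast pairs before they reach users.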
### Conclusion
Designing for inclusivity by creating accessible web experiences for all is not just a good practice; it's a legal and ethical imperative. Prioritizing web accessibility ensures that your content reaches a wider audience and enhances the user experience for everyone. By implementing practical advice and following the Web Content Accessibility Guidelines (WCAG), you can make your website a welcoming place for all users, regardless of their abilities. Embracing web accessibility isn't just the right thing to do; it's a strategic move for the future of your digital presence. | sajeeb_me |
1,642,301 | Demystifying Cloud Computing: Exploring the Core Pillars of Private and Public Clouds, IaaS, PaaS, and SaaS | The fundamental pillars of cloud computing include various deployment models and service... | 0 | 2023-12-02T18:18:24 | https://dev.to/msfaizi/demystifying-cloud-computing-exploring-the-core-pillars-of-private-and-public-clouds-iaas-paas-and-saas-n8c | cloud, aws, devops, opensource | ## The fundamental pillars of cloud computing include various deployment models and service models. Let's explore each of these pillars:
**1. Private Cloud:**

- A private cloud is a cloud environment exclusively used by a single organization.
- It is typically hosted on the organization's own infrastructure or in a dedicated environment provided by a cloud service provider.
- Private clouds offer enhanced security and control, making them suitable for organizations with strict data privacy and compliance requirements.
**2. Public Cloud:**

- A public cloud is a cloud infrastructure that is shared by multiple organizations and is owned and operated by a cloud service provider.
- It offers on-demand resources and services to the public over the internet.
- Public clouds are highly scalable, cost-effective, and suitable for a wide range of applications and workloads.
**3. Private Cloud vs. Public Cloud:**

- This pillar explores the differences and trade-offs between private and public clouds, helping organizations choose the right cloud deployment model for their specific needs.
- Considerations include factors like control, security, cost, scalability, and management.
**4. Brief Introduction of Infrastructure as a Service (IaaS):**

- IaaS is one of the three primary cloud service models and provides virtualized computing resources over the internet.
- With IaaS, organizations can rent virtual machines, storage, and networking resources, allowing them to set up and manage their own infrastructure.
- Users have control over the OS and software stack installed on the virtual machines.
**5. Brief Introduction of Platform as a Service (PaaS):**

- PaaS is a cloud service model that abstracts away the underlying infrastructure, focusing on providing a platform for developing, deploying, and managing applications.
- Developers can build and deploy applications without worrying about managing the underlying infrastructure.
- PaaS offerings often include development tools, databases, and application hosting environments.
**6. Brief Introduction of Software as a Service (SaaS):**

- SaaS is a cloud service model where software applications are hosted and provided to users over the internet on a subscription basis.
- Users access SaaS applications through web browsers without the need for local installation.
- Common examples of SaaS applications include email services, customer relationship management (CRM) software, and productivity tools like Google Workspace and Microsoft 365.
Follow me on [Linkedin](https://www.linkedin.com/in/shamim-ansari7/) | [Twitter](https://twitter.com/shamim_faizi786) for more such contents | msfaizi |
1,642,389 | 678. Valid Parenthesis String | Problem: 678. Valid Parenthesis String First Approach Can utilize a similar method like validating... | 0 | 2023-10-22T03:12:52 | https://dev.to/truongductri01/678-valid-parenthesis-string-58op | medium, leetcode | Problem: [678. Valid Parenthesis String](https://leetcode.com/problems/valid-parenthesis-string/description/)
<strong>First Approach</strong>
Can utilize a similar method like validating parenthesis.
Loop through each char `c` in the string `s` and check:
- if `c == *`, increase `countStar`
- if `c == (`, increase `countLeft`
- if `c == )`, decrease `countLeft` if `countLeft` > 0. otherwise, if `countStar` > 0, decrease it, else return false.
At the end, if `countStar >= countLeft`, return true, else, return false.
This may sound like a good approach, but it fails because it ignores another restriction that we have not taken into consideration:
`Left parenthesis '(' must go before the corresponding right parenthesis ')'.`
In other words, a star can only stand in for a right parenthesis if it appears after the left parenthesis it closes. For example, for s = `"*("` the counting approach ends with `countStar = 1 >= countLeft = 1` and returns true, yet the string is invalid because the `'('` is never closed.
<strong>Correct Approach</strong>
We will utilize Stack and Queue to keep track of the index of the left parenthesis and the stars.
After removing all the right parentheses as described above, we check the positions of the remaining stars and left parentheses to make sure they meet this requirement.
Here is a simple explanation:
- Use a stack for left-parenthesis indices and a queue for star indices.
- Only the indices (int values) of the left parentheses and stars are stored.
- Whenever a right parenthesis appears with no unmatched left parenthesis, remove a star from the front of the queue (the smallest index).
- Once only left parentheses remain, convert the star queue into a stack so that the largest star indices pop first.
- Repeatedly pop one left parenthesis and one star; if the star's index is smaller than the left parenthesis's index, return false.
- Keep going until one of the structures is empty.
```java
import java.util.*;

class Solution {
public boolean checkValidString(String s) {
Queue<Integer> starQ = new LinkedList<>();
Stack<Integer> leftS = new Stack<>();
for (int i = 0; i < s.length(); i++) {
char c = s.charAt(i);
if (c == '*') {
starQ.add(i);
} else if (c == '(') {
leftS.add(i);
} else {
// ) only
if (!leftS.isEmpty()) {
leftS.pop();
} else {
if (!starQ.isEmpty()) {
starQ.remove();
} else {
return false;
}
}
}
}
if (leftS.isEmpty()) {
return true;
} else if (starQ.isEmpty()) {
// this means still has left but no star
return false;
} else {
// convert
Stack<Integer> starS = new Stack<>();
while (!starQ.isEmpty()) {
starS.add(starQ.remove());
}
while (!starS.isEmpty() && !leftS.isEmpty()) {
int left = leftS.pop();
int star = starS.pop();
if (star < left) {
return false;
}
}
if (leftS.isEmpty()) {
return true;
}
return false;
}
}
}
``` | truongductri01 |
1,642,544 | Copy blobs between Storage Accounts with an Azure Function | Introduction In this post I’m going to detail how you can use the blob copy feature in... | 0 | 2023-12-05T16:01:27 | https://rios.engineer/copy-blobs-between-storage-accounts-with-an-azure-function/ | azure, automation, functions, azcopy | ---
title: Copy blobs between Storage Accounts with an Azure Function
published: true
date: 2023-04-02 13:40:34 UTC
tags: Azure,Automation,Functions,AzCopy
canonical_url: https://rios.engineer/copy-blobs-between-storage-accounts-with-an-azure-function/
---
## Introduction
In this post I’m going to detail how you can use the blob copy feature in AzCopy within an Azure Function.
This is especially useful if your requirement is to move or copy data from one Azure Storage Account to another based on a blob storage trigger event.
When a new upload to the blob container is detected, the function will trigger and copy to the destination of choice.
In my example, it will be a test.csv file dropped into the source Storage Account that then copies over to destination Storage Account.
I’m going to assume you already have two Storage Accounts with blob containers setup, in which you’re wanting to implement this or a similar style solution.
## Azure Function setup & trigger
Firstly, we’re going to [create an Azure Function](https://rios.engineer/automate-table-storage-backups-using-azure-function/#create-azure-function) using runtime of PowerShell in a Code publish method. I’m using a consumption plan type to reduce costs.
_I’ve selected an existing Storage Account under ‘Hosting’ when creating my function which holds my current container I want to copy from for simplicity. However, you can create a new Storage Account instead to keep the Azure Function containers separate from your existing Storage Account._

Once the deployment completes we want to create a trigger so the function will run when a new blob is uploaded to the container.
### Trigger setup and config
There’s a handy [event trigger for Azure Blob Storage](https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-trigger?tabs=in-process%2Cextensionv5&pivots=programming-language-powershell) which is perfect for this solution.

When selecting a blob storage trigger we’re presented with some additional options for the trigger criteria:
**Name:** This is the name of the trigger
**Path:** This will be the path to the container/blob. In my example you can see I’m using **test/{name}.csv**, so it only triggers on filenames with a .csv extension uploaded into the container ‘test’.
More on blob name patterns can be found here [Azure Blob storage trigger for Azure Functions | Microsoft Learn](https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-trigger?tabs=in-process%2Cextensionv5&pivots=programming-language-powershell#blob-name-patterns)
**Storage account connection:** My connection is **AzureWebJobsStorage** as I’ve linked my Function to an existing Storage Account for this (as mentioned in the intro). But if you want to select a different storage, click on ‘New’ and select the Storage Account you want for the trigger source. More on that here [Azure Blob storage trigger for Azure Functions | Microsoft Learn](https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-trigger?tabs=in-process%2Cextensionv5&pivots=programming-language-powershell#connections)
Finally, create the trigger and wait for deployment to complete.
## Uploading AzCopy
Next we need to upload AzCopy executable into the Function so it can be invoked when the function is triggered. I’ve detailed how to upload files into an Azure Function in a previous blog post [here](https://rios.engineer/automate-table-storage-backups-using-azure-function/#deploying-azcopy-to-the-azure-function-app). Go check it out if you need a more detailed step by step guide on this part.
We will be using AzCopy v10. This can be downloaded from the Microsoft website [here](https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10#download-azcopy).
A quick recap on how to do this:
1. Go to your Azure Function and locate Deployment on the left navigation pane
2. Click Deployment Center and locate the FTPS credentials tab
3. Copy the endpoint, username & password to connect to the Azure Function via an FTP client such as [FileZilla](https://filezilla-project.org/)
4. Lastly, lets drag & drop azcopy.exe from your machine to /site/wwwroot/BlobTrigger1 (or what the trigger name is set to)

## AzCopy
Next is the AzCopy copy command itself, which performs the required copy when the Function is triggered. AzCopy version 10 natively supports blob storage copy, which makes this quite simple to script.
1. In the Azure Function, go to Functions and select the blob storage trigger that has been created.
2. Once in the trigger go to Code + Test with a dropdown to select the ‘run.ps1’ file – this is what executes when the function is triggered.

_run.ps1_
3. Below the default bindings and log output we can enter our AzCopy command.
My working example looks like this:
```
C:\home\site\wwwroot\BlobTrigger1\azcopy.exe copy `
'https://<source-storage-account-name>.blob.core.windows.net/<container-name>?<SAS-token>' `
'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>?<SAS-token>' `
--recursive --include-pattern *.csv
```
In my example, I’m using two Blob SAS URLs that I have generated on each Storage Account ([here’s a guide on how to do this](https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/create-sas-tokens?view=form-recog-3.0.0#use-the-azure-portal)). I’ve selected the allowed resource types: container and left the rest of the defaults.
On my AzCopy command I’m using **_--recursive_** to ensure I copy across the same folder structure from the source. The **_--include-pattern \*.csv_** instructs only the CSV files within the structure to be the files that get copied across.
Alternatively, you can use many different other pattern combos to match different requirements, such as filename matches and more. [Copy blobs between Azure storage accounts with AzCopy v10 | Microsoft Learn](https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-blobs-copy)
<mark style="background-color:#fcb900">Note: Shared Access Signature (SAS) tokens are the only supported authentication method for this AzCopy functionality. From my testing, it appears a managed identity cannot be used: there is no way to authenticate AzCopy within the Function like you can with the Az PowerShell modules. Let me know if you know a way around this, I was unsuccessful!</mark>
More info on how to create a Shared Access Token for a storage account: [Create a service SAS for a container or blob – Azure Storage | Microsoft Learn](https://learn.microsoft.com/en-us/azure/storage/blobs/sas-service-create?tabs=dotnet)
## Testing
Let's start by testing the functionality.
I’m matching only against .csv files for both trigger events & the AzCopy copy command.
- Upload a test.csv to Storage Account 1 (source).
- The function will trigger.
- This then executes our run.ps1 AzCopy command which will perform the copy.
Upload test.csv in the Azure Portal GUI to Storage Account 1:

After a few minutes I can see that the function has executed on the overview page (under Function Execution count). In my destination Storage Account Container I can review and check the copy was successful as the test.csv now appears.

### Checking the logs
Under the Function, select the BlobTrigger1 (or your trigger name you specified) and locate the Monitor section under Developer.

Here we can review the execution logs and review any error outputs. In my example you can see the Event Trigger output and then the copy output which was successful.
## Closing thoughts
Rather than uploading AzCopy manually into the Function, I could download it automatically via PowerShell on each run (although I wanted to eliminate any delay on execution, which is why I haven’t done so here).
Additionally, adding the Blob SAS URLs as application settings and reading them in the run.ps1 file as environment variables would also help standardise the code going forward.
I did run into an issue where the function ran out of memory when transferring larger files, even with a premium plan – unfortunately I didn’t get to the bottom of resolving that. Let me know if you have a similar issue and resolve it!
For house keeping you can also create a [Lifecycle management rule](https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview) to keep things tidy and clean up old files.
Hope you found this post useful and of interest, let me know your comments below.
Thanks!
Dan
The post [Copy blobs between Storage Accounts with an Azure Function](https://rios.engineer/copy-blobs-between-storage-accounts-with-an-azure-function/) appeared first on [Rios Engineer](https://rios.engineer). | riosengineer |
1,642,673 | Abstracting the CNJ Public API - DataJud | This week Filipe Deschamps posted in his official newsletter about the DataJud Public API,... | 0 | 2023-10-22T13:43:07 | https://dev.to/joaotextor/abstraindo-a-api-publica-do-cnj-datajud-54d | javascript, webdev, api, programming | This week Filipe Deschamps posted in his official newsletter about the DataJud Public API, launched by the CNJ to offer, in a unified way through a single API, the metadata of judicial proceedings from all over Brazil (except those under judicial secrecy).
Since I currently work in the Federal Judiciary and am interested in this subject, I created a JavaScript library to abstract the use of this API, and I am inviting the Brazilian open-source community to help maintain it and build new features 😍.
[Project GitHub](https://github.com/joaotextor/busca-processos-judiciais)
[NPM](https://www.npmjs.com/package/busca-processos-judiciais)
[Official announcement posted here on Tabnews](https://www.tabnews.com.br/NewsletterOficial/cnj-lanca-a-api-publica-do-datajud-oferecendo-acesso-geral-aos-metadados-de-processos-judiciais-em-todo-o-brasil)
Invite your friends, family, and pets to contribute to the project's growth 😃 | joaotextor |
1,642,765 | Application Dependency Mapping: A 2024 Guide | What Is Application Dependency Mapping? Application dependency mapping is a process that... | 0 | 2023-10-22T17:02:24 | https://dev.to/giladmaayan/application-dependency-mapping-a-2024-guide-25n6 |

## What Is Application Dependency Mapping?
Application dependency mapping is a process that visualizes and documents the dependencies between software applications and the underlying IT infrastructure. In essence, it provides a detailed map of how different software applications interact with each other and with the various components of the IT infrastructure.
The [application dependency mapping process](https://faddom.com/application-dependency-mapping/) begins with the discovery phase, where all applications and infrastructure components are identified. This is followed by the dependency mapping phase, where relationships between these elements are established. The final phase is the visualization phase, where the map is created, providing a visual representation of the dependencies.
ADM is not a one-time event but an ongoing process. As new applications are added, existing ones are updated, or infrastructure components are replaced, the map needs to be updated to reflect these changes. By maintaining an up-to-date map, IT teams can ensure they have a clear understanding of their IT environment at all times.
## Benefits of Effective Application Dependency Mapping
### Enhanced Troubleshooting and Rapid Issue Resolution
When an application performance issue arises, it can be challenging to pinpoint the root cause, especially in complex IT environments. However, with an up-to-date application dependency map, IT teams can quickly identify which infrastructure components an affected application depends on. This dramatically accelerates the troubleshooting process, enabling rapid issue resolution.
In addition to aiding in troubleshooting, ADM can also help prevent issues from occurring in the first place. By understanding the dependencies between applications and infrastructure components, IT teams can proactively address potential problems before they impact application performance.
### Improved Infrastructure Planning and Optimization
ADM is also a powerful tool for infrastructure planning and optimization. By visualizing the dependencies between applications and infrastructure components, IT teams can gain valuable insights into how resources are being utilized. This can inform decisions about infrastructure upgrades, capacity planning, and resource allocation.
Furthermore, ADM can help identify underutilized resources, enabling IT teams to optimize their infrastructure and reduce costs. By ensuring resources are allocated efficiently, organizations can get the most value from their IT investments.
### Streamlined Migrations and Upgrades
Migrations and upgrades are often complex and risky endeavors. Without a clear understanding of application dependencies, these projects can lead to unexpected problems and prolonged downtime. However, with ADM, IT teams can plan and execute migrations and upgrades with greater confidence and efficiency.
For example, when planning a migration, a dependency map can help identify which applications need to be migrated together to avoid breaking dependencies. Similarly, during an upgrade, ADM can help ensure that all dependencies are maintained, reducing the risk of post-upgrade issues.
### Risk Mitigation and Enhanced Security Posture
Finally, ADM plays a crucial role in risk mitigation and enhancing an organization's security posture. By understanding the dependencies between applications and infrastructure components, IT teams can better assess the potential impact of security incidents.
For instance, if a security vulnerability is identified in a particular component, ADM can help determine which applications are at risk. This allows IT teams to prioritize their response and focus their efforts where they will have the greatest impact.
## Current and Emerging Application Dependency Mapping Use Cases
ADM is not just a tool for managing traditional IT environments. It also has significant potential in emerging areas such as cloud migrations and edge computing.
### Cloud Migrations
Cloud migrations are becoming increasingly common as organizations seek to capitalize on the benefits of cloud computing. However, these migrations can be complex and risky, especially without a clear understanding of application dependencies.
ADM can help mitigate these risks by providing a clear map of application dependencies, enabling IT teams to plan and execute their migrations with greater confidence. By understanding which applications depend on which infrastructure components, IT teams can ensure that all necessary resources are migrated to the cloud, reducing the risk of post-migration issues.
### Edge Computing
Edge computing is another area where ADM can deliver significant benefits. In edge computing environments, applications are distributed across numerous edge devices, creating a highly complex web of dependencies.
ADM can help manage this complexity by providing a clear map of these dependencies. This can inform decisions about where to deploy applications, how to allocate resources, and how to manage and troubleshoot issues. As edge computing continues to evolve, I expect ADM to play an increasingly important role in managing these environments.
### IoT Ecosystems
The Internet of Things (IoT) has ushered in a new era of connectivity, where everyday objects seamlessly interact with each other via the internet. From smart home systems to sophisticated industrial machinery, IoT devices are now an integral part of our lives. However, this interconnectedness also brings complexity.
ADM is instrumental in managing this complexity. By creating a visual map of these dependencies, you can better understand how your IoT devices interact, troubleshoot potential issues, and plan for future growth. For instance, if an IoT device fails, ADM can help you quickly identify the affected applications and services, thereby speeding up the resolution process.
Furthermore, ADM can also aid in securing IoT ecosystems. By mapping out the dependencies, you can identify potential weak points in your network, such as an outdated device that could serve as an entry point for cyber-attacks. This allows you to proactively address vulnerabilities and strengthen your overall security posture.
### 5G and Enhanced Connectivity
The advent of 5G technology promises to revolutionize connectivity by offering unprecedented speed and reduced latency. However, it also presents new challenges, particularly in managing the numerous applications and services that will inevitably rely on 5G connectivity.
ADM can play a crucial role in this context. By mapping out the dependencies between these applications, you can ensure optimal performance and minimize downtime. For example, if a certain application relies heavily on 5G connectivity, any disruption in the 5G network could impact its performance. With ADM, you can quickly identify such dependencies and take proactive measures to mitigate potential issues.
Moreover, ADM can also support efficient network planning in a 5G environment. By understanding how different applications interact with each other and the underlying 5G network, you can design a more robust and resilient network architecture.
## The ADM Process: Step-by-Step Guide
### Inventorying and Discovery: Identifying All Components
The first step in the ADM process involves inventorying and discovery, where you identify all the components in your IT environment. This includes not only applications but also servers, databases, and network devices, among others.
This step is crucial because it sets the foundation for the rest of the ADM process. By having a comprehensive understanding of your IT landscape, you can ensure a more accurate and reliable dependency map.
### Data Collection: Gathering Dependency Data
Once you've identified all the components, the next step is to gather dependency data. This involves understanding how these components interact with each other. For example, you might need to determine which applications rely on a certain database or how traffic flows between different network devices.
This step can be challenging, particularly in complex IT environments. However, various tools and technologies can aid in this process, from network monitoring software to application performance management solutions. The key is to collect as much relevant data as possible to ensure a comprehensive and accurate dependency map.
### Visualization: Building the Dependency Map
After gathering the dependency data, the next step is to build the dependency map. This involves visualizing the interactions and dependencies between the identified components.
This step is critical because it transforms the collected data into a meaningful and actionable format. With a visual dependency map, you can quickly identify potential issues, plan for changes, and make informed decisions.
### Validation and Verification
Once the dependency map is built, it's important to validate and verify it. This involves checking the map for accuracy and ensuring it reflects the current state of your IT environment.
This step is essential to ensure the reliability and usefulness of the dependency map. Remember, an inaccurate map can lead to wrong decisions and potential issues down the line.
### Continuous Monitoring and Updates
Lastly, the ADM process doesn't end with the creation of the dependency map. Given the dynamic nature of IT environments, it's crucial to continuously monitor and update the map.
This step ensures that the map remains accurate and relevant over time. By regularly updating the map, you can stay ahead of changes in your IT environment and continue to make informed decisions.
## Conclusion
With the right approach and tools, Application Dependency Mapping can become an integral part of your IT management strategy. By following the steps outlined in this guide, you can create a comprehensive and accurate dependency map that can aid in a myriad of use cases, from managing IoT ecosystems to leveraging 5G connectivity. So, take the first step today and embark on your ADM journey. | giladmaayan | |
1,642,978 | Bash Script Operators | INTRODUCTION A Bash shell script, often known as a shell script or just a shell, is a text... | 0 | 2023-10-22T23:08:17 | https://dev.to/dhebbythenerd/bash-script-operators-4e6e | beginners, devops, cloudcomputing, bash | ## INTRODUCTION
A Bash shell script, often known simply as a shell script, is a text file that contains a series of commands written in the Bash (Bourne Again Shell) scripting language.
Operators are special symbols or characters used in Bash scripting to act on values and variables; they are frequently combined with commands to perform calculations, comparisons, and conditional checks.
This article is for beginners who are interested in learning the definition of bash operators and how to use them with Linux commands.
Note: You should have prior knowledge of the shebang line and why it must be included at the top of a script.
You should also have knowledge of Linux permissions.
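To tie these prerequisites together, here is a minimal, hypothetical example script (the filename `greet.sh` is just for illustration):

```shell
#!/bin/bash
# greet.sh - a minimal example script
name="World"
echo "Hello, $name!"
```

Saved as `greet.sh`, it would be made executable with `chmod +x greet.sh` and run as `./greet.sh`.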
## KNOW YOUR BASH OPERATORS
There are different categories of bash operators but for the purpose of this article we will focus on four:
- Arithmetic Operators
- Relational or Comparison Operators
- Boolean or Logical Operators
- FileTest Operators
## Arithmetic Operators
These operators are used to perform mathematical operations. The following is a list of the seven bash arithmetic operators:
For the purpose of explaining these operators, two variables will be used:
`num1=20`
`num2=5`
**Addition** `+`: This operator is used to add two operands and returns the result of that operation.
```
echo $(( num1 + num2 ))
echo $(( 20 + 5 ))
echo $(( 25 ))
output: 25
```
**Subtraction**`-`: This operator is used to subtract one operand from the other and returns the result of that operation.
```
echo $(( num1 - num2 ))
echo $(( 20 - 5 ))
echo $(( 15 ))
output: 15
```
**Multiplication**`*`: This operator is used to multiply two operands and returns the result of that operation.
```
echo $(( num1 * num2 ))
echo $(( 20 * 5 ))
echo $(( 100 ))
output: 100
```
**Division**`/`: This operator is used to divide two operands and returns only the integer part.
```
echo $(( num1 / num2 ))
echo $(( 20 / 5 ))
echo $(( 4 ))
output: 4
```
**Modulus** `%`: This operator is used to divide two operands and returns only the remainder part.
```
echo $(( num1 % num2 ))
echo $(( 20 % 5 ))
echo $(( 0 ))
output: 0
```
**Increment**`++`: This is a unary operator used to increase the operand by one.
Using the same example, we introduce another variable:
`sum=$(( num1 + num2 ))`
```
echo $(( ++sum ))
output: 26
```
**Decrement**`--`: This is a unary operator used to decrease the operand by one.
Using the same example, we introduce another variable:
`sum=$(( num1 * num2 ))`
```
echo $(( --sum ))
output: 99
```
**An alternative way to use arithmetic operators is the `expr` command.** This command works only with integer values, which means it will not work with floating-point numbers. This method gives the same output as the method above.
```
echo $( expr $num1 + $num2 )
echo $( expr $num1 - $num2 )
echo $( expr $num1 / $num2 )
echo $( expr $num1 \* $num2)
```
Note that `*` must be escaped as `\*` in the multiplication line; without the backslash, the shell expands `*` as a glob pattern before `expr` sees it, producing a syntax error.
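To see the integer-only restriction in practice, the sketch below shows `expr` succeeding with whole numbers and failing with a float (the exact error message depends on the implementation; GNU coreutils reports a non-integer argument):

```shell
int_result=$(expr 20 + 5)   # integers work as expected
echo "$int_result"          # 25

# A floating-point operand makes expr exit with an error status
if ! expr 20.5 + 5 > /dev/null 2>&1; then
  echo "expr cannot handle floats"
fi
```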
## Relational Operators
These operators specify the relationship between two operands. Depending on the relationship, they either yield true or false. These operators are:
**Equal to** `(==)` or `(-eq)`: It returns `true` if the two operands are equal, otherwise it returns `false`. Use `-eq` for integer comparisons and `==` (or `=`) for string comparisons inside `[[ ]]`.
```
var1=40
var2=40
if (( $var1 == $var2 ))
then
echo True
else
echo False
fi
```
`output: True`
Another example, this time comparing strings (strings are compared with `==` or `=`; `-eq` is reserved for integers):
```
var1='banana'
var2='banana'
if [[ "$var1" == "$var2" ]]
then
echo True
else
echo False
fi
```
`output: True`
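A related caution, worth a quick demonstration: `-eq` performs a numeric comparison and should only be used with integers, while `==` (or `=`) compares strings. The variable names below are just for illustration:

```shell
a="apple"
b="banana"
if [[ "$a" == "$b" ]]; then str_equal="yes"; else str_equal="no"; fi
echo "$str_equal"   # no - the strings differ

x=7
y=7
if [[ "$x" -eq "$y" ]]; then num_equal="yes"; else num_equal="no"; fi
echo "$num_equal"   # yes - numeric comparison
```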
**Not Equal to **`(!=)` or `(-ne)`: It returns `true` if the two operands are not equal, otherwise it returns `false`.
```
var1=40
var2=20
if [[ "$var1" -ne "$var2" ]]
then
echo True
else
echo False
fi
```
Another way to write this is:
```
if (( "$var1" != "$var2" ))
then
echo True
else
echo False
fi
```
`output: True`
**Less than** `(<)` or `(-lt)`: The operator returns `true` if the first operand is less than the second operand, otherwise it returns `false`.
```
var1=40
var2=20
if [[ "$var1" -lt "$var2" ]]
then
echo True
else
echo False
fi
```
Another way to write this is:
```
if (( "$var1" < "$var2" ))
then
echo True
else
echo False
fi
```
`output: False`
**Greater than** `(>)` or `(-gt)`: It returns `true` if the first operand is greater than the second operand, otherwise it will return `false`
```
var1=40
var2=20
if [[ "$var1" -gt "$var2" ]]
then
echo True
else
echo False
fi
```
Another way to write this is:
```
if (( "$var1" > "$var2" ))
then
echo True
else
echo False
fi
```
`output: True`
**Less or equal to** `(<=)` or `(-le)`: This operator returns `true` if the first operand is less than or equal to the second operand, otherwise it returns `false`.
```
var1=40
var2=20
if [[ "$var1" -le "$var2" ]]
then
echo True
else
echo False
fi
```
Another way to write this is:
```
if (( "$var1" <= "$var2" ))
then
echo True
else
echo False
fi
```
`output: False`
**Greater than or equal to** `(>=)` or `(-ge)`: This operator returns `true` if the first operand is greater than or equal to the second operand, otherwise it returns `false`.
```
var1=40
var2=20
if [[ "$var1" -ge "$var2" ]]
then
    echo True
else
    echo False
fi

# Another way to write this is:
if (( "$var1" >= "$var2" ))
then
    echo True
else
    echo False
fi
```
`output: True`
## Boolean Operators
These operators combine two conditions (or negate one) and return `true` or `false` accordingly.
**Logical And** `(&&)` or `(-a)`: This operator returns `true` if both operands are `true`, otherwise it returns `false`.
```
var=40
if [ "$var" -gt 20 -a "$var" -lt 60 ]
then
    echo true
else
    echo false
fi

# Alternatively, it can be written in the following ways
# (note: the spaces around the brackets are required):
if [ "$var" -gt 20 ] && [ "$var" -lt 60 ]
then
    echo true
else
    echo false
fi

# or this way:
if [[ "$var" -gt 20 && "$var" -lt 60 ]]
then
    echo true
else
    echo false
fi
```
`output: true`
**Logical Or** `(||)` or `(-o)`: This operator returns `true` if at least one of the operands is `true`, and `false` only if both operands are `false`.
```
var=40
if [ "$var" -gt 60 -o "$var" -lt 50 ]
then
    echo true
else
    echo false
fi

# Alternatively, it can be written in the following ways:
if [ "$var" -gt 60 ] || [ "$var" -lt 50 ]
then
    echo true
else
    echo false
fi

# or this way:
if [[ "$var" -gt 60 || "$var" -lt 50 ]]
then
    echo true
else
    echo false
fi
```
`output: true`
**Logical Not** `(!)`: This unary operator inverts a single condition: it returns `true` if the operand is `false`, and `false` if the operand is `true`.
```
var=40
# ! negates the single test that follows it
if [ ! "$var" -gt 60 ]
then
    echo true
else
    echo false
fi

# The same check written with [[ ]]:
if [[ ! "$var" -gt 60 ]]
then
    echo true
else
    echo false
fi
```
`output: true`
## File Operators
In Bash, file operators are used to verify and test files and directories. In order to show whether a certain condition is satisfied, file operators return a Boolean value (true or false).
**-b operator**: This operator determines whether or not a file is a block special file. If the file is a block special file, it returns true; otherwise, it returns false.
**-c operator**: This operator determines whether or not a file is a character special file. If the file is a character special file, it returns true; otherwise, it returns false.
**-d operator**: This operator checks if the path exists and is a directory. If so, it returns true; otherwise, it returns false.
**-e operator**: This operator checks if a file exists. If it exists it returns true; otherwise, it returns false.
**-f operator**: This operator checks if a path exists and points to a regular file (not a directory or device). If it exists it returns true; otherwise, it returns false.
**-L operator**: This operator checks if the path exists and is a symbolic link. If so, it returns true; otherwise, it returns false. (Note: Bash's `test` uses `-L` or `-h` for this check; a plain `-l` is not a valid file operator.)
**-r operator**: This operator checks if the file and directory exist and are readable. If the file or directory is readable then it returns true; otherwise, it returns false.
**-s operator**: This operator checks the size of the given file. If the size of the given file is greater than 0 then it returns true; otherwise, it is false
**-w operator**: This operator checks if the file and directory exist and are writable. If the file or directory is writable then it returns true; otherwise, it returns false.
**-x operator**: This operator checks if the file and directory exist and are executable. If the file or directory is executable then it returns true; otherwise, it returns false.
The syntax to use any of the file operators is:
```
if [ fileoperator file-name ]
then
    # commands to run when the test is true
fi
```
For example, to check whether a path is a symbolic link:
```
var=/etc/ssh/sshd_config
if [ -L "$var" ]
then
    echo 'symbolic link'
else
    echo 'not a symbolic link'
fi
```
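To see several of the file operators in action without depending on any particular system file, here is a small self-contained sketch using a temporary file (the path `mktemp` returns is whatever the system chooses):

```shell
tmp=$(mktemp)                          # create an empty temporary file
[ -e "$tmp" ] && echo "exists"         # -e: path exists
[ -f "$tmp" ] && echo "regular file"   # -f: regular file
[ -s "$tmp" ] || echo "size is 0"      # -s: true only if size > 0
echo hello > "$tmp"
[ -s "$tmp" ] && echo "now non-empty"
rm -f "$tmp"                           # clean up
```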
## Conclusion
You can write Bash scripts that make decisions, carry out computations, and interact with files and directories by using operators such as the arithmetic, comparison, logical, and file operators. These operators are necessary for building flexible, error-handling scripts that can respond to a variety of situations.
| dhebbythenerd |
1,643,013 | What are we building⁉️ | Lucky Holders aims to redefine the NFT industry by providing a comprehensive range of utilities and... | 0 | 2023-10-23T01:10:02 | https://dev.to/luckyholders/what-are-we-building-2e1 | interview, productivity, blockchain, web3 |

Lucky Holders aims to redefine the NFT industry by providing a comprehensive range of utilities and exclusive opportunities to its holders.
“By offering a vast array of NFT-backed access and utilities, Lucky Holders seeks to empower its community members and revolutionize the way individuals engage with finance, technology, art, gaming, education, and other industries.”

By leveraging the power of blockchain technology, Lucky Holders aims to create a dynamic ecosystem that bridges the gap between traditional finance, technology, art, gaming, education, and much more.
The Lucky Holders project aims to foster a vibrant ecosystem where NFT holders can explore, collaborate, and unlock limitless opportunities.
Stay tuned for the launch of Lucky Holders and join us on this exciting journey towards a new era of NFT-powered experiences.🧡
| luckyholders |
1,643,061 | Ansible - Part 1 | Lets Install Ansible sudo apt install software-properties-common sudo... | 0 | 2023-10-25T23:58:39 | https://dev.to/technonotes/ansible-part-1-1fgh | ### Lets Install Ansible
```
sudo apt install software-properties-common
sudo add-apt-repository --yes ppa:ansible/ansible
apt update
sudo apt install ansible
# Or, if any issues are faced, follow the steps below
# according to the error message shown on your screen:
sudo apt install software-properties-common
sudo apt-add-repository ppa:ansible/ansible
sudo apt remove ansible
sudo apt -y install software-properties-common
sudo apt-add-repository ppa:ansible/ansible
sudo apt install ansible -y
ansible --version
sudo apt remove --purge ansible
sudo add-apt-repository --remove ppa:ansible/ansible
sudo apt update
sudo apt-add-repository ppa:ansible/ansible
sudo apt update
sudo cp /etc/apt/trusted.gpg /etc/apt/trusted.gpg.d
sudo apt update
sudo apt install ansible
ansible --version
```






### Virtual Machine Manager







> **Removed the space while giving the name of the VM.**


> **PLEASE Check the ERROR**



> **This may take sometime.**





> **Make it ON**




> **Enable both options**



> **Required "To enable different IP when its CLONED"**
```
echo -n > /etc/machine-id
rm /var/lib/dbus/machine-id
ln -s /etc/machine-id /var/lib/dbus/machine-id
```
## Ansible Configurations for the Target servers
> **/etc/ansible**

> **SSH Key Generation**
- ssh-keygen


> **Get the IP of the target server**

> **Add the source Ansible Machine Public key to the target servers here I have only CENTOS**
```
ssh-copy-id -i id_rsa.pub sathishpy1808@192.168.122.96
```


> **Edit the files in the Ansible machine to say these are the target machines or slave machines. ( Even this can be done in HOME directory too )**


```
[defaults]
inventory = hosts
```


```
[centos]
192.168.122.96
```
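For reference, a slightly fuller `hosts` inventory sketch — the `ubuntu` group, its IP address, and the shared variable below are hypothetical additions, not part of this setup:

```
[centos]
192.168.122.96

[ubuntu]
192.168.122.97

[all:vars]
ansible_user=sathishpy1808
```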
> **Change the hostname of target machine for better understanding.**
```
sudo hostnamectl set-hostname centos-node-1
```

- exec bash ( once you execute this , you can see the difference in the screen )

> **Ping the target machine from ansible source machine and check.**
```
ansible all -m ping
```

> **Get the OS version from ansible machine**
```
ansible all -a "cat /etc/os-release"
```

> **To avoid users to enter SUDO each time , add the existing user to the SUDO.**
```
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
sathishpy1808 ALL=(ALL) NOPASSWD: ALL
```



> **Help command or Man Pages for Ansible**
```
man ansible
-b --> execute as root
```


> **Lets Install once HTTPD server i.e Apache using ADHOC command ( Not using PLAY book )**
```
ansible centos -m package -a "name=httpd state=present"      --> throws an error: root access is needed to run the command
ansible centos -m package -a "name=httpd state=present" -b   --> fails again with another permission error
```




```
time ansible centos -m package -a "name=httpd state=present" -b
-b ---> run as root ( become TRUE )
```

- Now check whether Apache installed in Target server.
```
ssh sathishpy1808@192.168.122.96
Last login: Thu Oct 26 04:20:47 2023 from 192.168.122.1
[sathishpy1808@centos-node-1 ~]$ systemctl status httpd
```

- Installed but NOT STARTED
- Lets start from Source Ansible machine ONLY , you should not touch the target servers at any cost. All changes must be done only in source machine i.e ansible machine.
- Actually why it's not started ? it needs to call the modules NEXT for starting the httpd in Centos , which we can give from source ansible machine.
```
time ansible centos -m ansible.builtin.service -a "name=httpd state=started" --> NOT WORKING
time ansible centos -m ansible.builtin.service -a "name=httpd state=started" -b
```


> **List all servers in the Ansible**
```
ansible all --list-hosts
```

> **File creation or Directories in all the target systems**
```
time ansible centos -m file -a "path=/home/sathishpy1808/test mode=755 state=directory" -b
ssh sathishpy1808@192.168.122.96
stat -c %a test/
```

### Errors
Unable to get the version,

https://askubuntu.com/questions/1460877/gitgit-ansible-version-error-ansible-requires-the-locale-encoding-to-be-u

```
sudo nano /etc/default/locale
LANG="en_US.UTF-8"
LC_CTYPE="en.US.UTF-8"
sudo update-locale LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8
```
### Important Points :
- Download and install kvm and install
ubuntu 22.04 server edition and centos 7 minimal server
with 1GB RAM / 2CPU
https://ubuntu.com/download/server#downloads
http://isoredirect.centos.org/centos/7/isos/x86_64/
- All changes should be ONLY in Ansible Machine ( don't touch the Target machine )
https://docs.ansible.com/ansible/latest/command_guide/intro_adhoc.html#managing-services
- Editor always goes with NANO , how to change ?
```
sudo update-alternatives --config editor
```

```
installed using command : sudo apt install vim
```

### URL's
https://docs.ansible.com/ansible/latest/command_guide/intro_adhoc.html
https://www.devopsschool.com/tutorial/ansible/ansible-linux-adhoc-commands.html#Program1
### Commands to Recollect
```
ansible all -m ping
sudo apt-add-repository ppa:ansible/ansible
sudo apt install ansible -y
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository --remove ppa:ansible/ansible
sudo update-alternatives --config editor
ansible all --list-hosts
ansible all -m package -a "name=httpd state=present"
ansible ubuntu -m file -a "path=/home/kannan/test mode=755 state=directory" -b
ansible webservers -m ansible.builtin.service -a "name=httpd state=started"
```
| technonotes | |
1,643,206 | Placeholder Contributor | Intro Highs and Lows Growth | 0 | 2023-10-23T07:12:02 | https://dev.to/fresult/placeholder-contributor-5422 | hack23contributor | <!-- ✨This template is only meant to get your ideas going, so please feel free to write your own title, structure, and words! ✨ -->
### Intro
<!-- Share a bit about yourself as a contributor. Is this your first Hacktoberfest, or have you contributed to others? Feel free to embed your GitHub account by using {% embed LINK %} -->
### Highs and Lows
<!-- What were some of your biggest accomplishments or light-bulb moments this month? Did any problems come up that seemed impossible to fix? How did you adapt in those cases? -->
### Growth
<!-- What was your skillset before Hacktoberfest 2023 and how did it improve? Have your learning or career goals changed since working on these new projects? --> | fresult |
1,643,398 | Business Analyst Career | A career as a Business Analyst can be rewarding and offers a range of opportunities in various... | 0 | 2023-10-23T10:29:45 | https://dev.to/abhinav1838/business-analyst-career-2ipl | business, course, onlinetraining, certification | A career as a Business Analyst can be rewarding and offers a range of opportunities in various industries. Business Analysts are responsible for bridging the gap between business needs and technology solutions, helping organizations make informed decisions and improve their processes.
Here's an overview of a Business Analyst career:

1. Education and Skills: Most Business Analysts have at least a bachelor's degree, often in fields like business, economics, engineering, or information technology.
Key skills include analytical thinking, problem-solving, communication, data analysis, and domain knowledge in the industry they work in.
Business Analysts also benefit from familiarity with tools like Microsoft Excel, data visualization software, project management tools, and knowledge of programming languages for data analysis.
2. Job Responsibilities: Business Analysts are responsible for understanding the organization's goals and objectives and translating them into actionable plans.
They analyze data and business processes to identify areas for improvement.
They create detailed requirements, business cases, and project plans.
Business Analysts facilitate communication between business stakeholders and IT teams.
They often work on projects related to system implementation, process improvement, and strategy development.
3. Types of Business Analysts: Systems Analysts focus on IT systems and software.
Data Analysts specialize in data analysis and reporting.
Business Process Analysts work on improving business processes and workflows.
Financial Analysts focus on financial data and performance.
Product Analysts work on improving and developing products or services.
4. Career Path: Junior Business Analyst: Entry-level positions that involve assisting more experienced analysts.
Business Analyst: This is the standard role, involving responsibilities such as requirements gathering, documentation, and project coordination.
Senior Business Analyst: With experience, you can take on more complex projects, lead teams, and provide strategic input.
Lead Business Analyst: In a leadership role, you may oversee a team of analysts, set standards, and make higher-level decisions.
5. Certifications: Some professionals pursue certifications like the Certified Business Analysis Professional (CBAP) or Certified Business Analyst Manager (CBAM) to enhance their credentials.
6. Industries: Business Analysts are in demand across various industries, including healthcare, finance, IT, retail, and more.
7. Advancement Opportunities: Business Analysts often have opportunities for career advancement into roles such as Project Manager, Product Manager, or even executive positions.
8. Salary: Salaries can vary based on factors like location, experience, and industry. Business Analysts can earn competitive salaries.
9. Job Outlook: The demand for Business Analysts is generally strong, as organizations continue to rely on data-driven decision-making and process optimization.
To excel in a Business Analyst career, continuous learning and adaptability are crucial. Staying up-to-date with industry trends, tools, and methodologies will help you thrive in this role. Additionally, effective communication and the ability to work well with cross-functional teams are essential skills for success as a Business Analyst.
| abhinav1838 |
1,643,799 | Creating a First-Person Shooter Game | First-person shooter (FPS) games have long been a staple of the gaming world, offering thrilling and... | 0 | 2023-10-23T15:30:08 | https://dev.to/carolreed/creating-a-first-person-shooter-game-381 | gamedev, shootinggame, shootergame | First-person shooter (FPS) games have long been a staple of the gaming world, offering thrilling and immersive experiences for players. If you've ever dreamt of developing your own FPS game, you're in the right place. In this comprehensive guide, we'll walk you through the process of creating a first-person shooter game, from conceptualization to deployment. Whether you're a seasoned game developer or a beginner in the world of game development, we'll cover all the essential steps to get you started on your FPS game development journey.
## Introduction to First-Person Shooter Games
Before diving into the development process, it's crucial to understand what FPS games are and what makes them so appealing.
## What Is a First-Person Shooter (FPS) Game?
An FPS game is a genre of video game where the player views the game from a first-person perspective, essentially "seeing" through the eyes of the in-game character. The core gameplay typically revolves around combat, where players use a variety of weapons to defeat opponents. FPS games often feature fast-paced action, realistic graphics, and complex environments.
## Step 1: Define Your Game Concept
Every successful game starts with a well-defined concept. To create a compelling FPS game, you need a clear vision of what your game will be about.
**1.1 Game Theme and Setting**
Begin by determining the theme and setting for your game. Will it be a sci-fi adventure, a military simulation, a zombie apocalypse, or something entirely unique? Define the world in which your game will take place.
**1.2 Storyline**
Create a captivating storyline to immerse players in your game's narrative. The plot can be as simple or complex as you like, but it should provide context for the action and motivate players to progress.
## Step 2: Choose the Right Game Engine
Selecting the right game engine is a critical decision in FPS game development. Game engines provide the framework for your game's creation, including graphics, physics, and gameplay mechanics.
**2.1 Popular Game Engines for FPS Games**
Several game engines are well-suited for FPS game development, including:
- **Unity**: Known for its versatility, Unity offers robust tools for creating FPS games, with a large and active community for support.
- **Unreal Engine**: Unreal Engine is renowned for its stunning graphics and realistic physics, making it an excellent choice for visually impressive FPS titles.
- **CryEngine**: CryEngine is another option, known for its exceptional rendering capabilities, which can bring your FPS game to life.
**2.2 Learning and Mastering the Chosen Engine**
Once you've selected a game engine, take the time to learn and master its features. Most engines provide extensive documentation and tutorials to help you get started.
## Step 3: Designing Game Levels and Environments
The design of game levels and environments is a crucial aspect of FPS game development. Engaging and immersive settings can greatly enhance the player's experience.
**3.1 Layout and Flow**
Design your levels with a focus on player navigation and flow. Consider the placement of obstacles, enemies, and key objectives to create challenging and dynamic gameplay.
**3.2 Graphics and Atmosphere**
Graphics and atmosphere are key to immersing players in your game. Pay attention to lighting, textures, and environmental details to create a realistic and captivating world.
## Step 4: Developing Gameplay Mechanics
Developing the gameplay mechanics is where your FPS game truly takes shape. FPS games are characterized by fast-paced action and combat, so it's essential to get this right.
**4.1 Player Movement**
FPS games require fluid and responsive player movement. Implement mechanics for walking, running, jumping, and crouching.
**4.2 Weapons and Combat**
The heart of any FPS game is its combat system. Design and implement a variety of weapons, each with distinct characteristics. Ensure that aiming and shooting mechanics are precise and satisfying.
**4.3 Enemy AI**
Create intelligent and challenging enemy AI. The behavior of enemies should be dynamic and responsive, providing a sense of danger and excitement.
## Step 5: Balancing and Testing
Balancing your game is crucial to ensure that it's enjoyable and challenging. This step involves thorough playtesting and iterative adjustments.
**5.1 Playtesting**
Gather a group of playtesters to evaluate your game. Pay attention to their feedback on difficulty, level design, and overall gameplay experience.
**5.2 Iteration**
Use the feedback from playtesters to make necessary adjustments. This may involve tweaking enemy AI, adjusting weapon stats, or fine-tuning level layouts.
## Step 6: Polishing Your Game
The final stages of game development involve polishing your FPS game to make it ready for release.
**6.1 Graphics and Sound**
Ensure that the graphics and sound effects are of high quality. Polished visuals and audio greatly contribute to the overall experience.
**6.2 Bug Fixing**
Thoroughly test your game for bugs and glitches. Resolve any issues to ensure a smooth and enjoyable gaming experience.
## Step 7: Deployment and Distribution
After all the hard work, it's time to release your FPS game to the world.
**7.1 Platforms**
Decide on the platforms where your game will be available. This could include PC, consoles, or mobile devices.
**7.2 Marketing and Promotion**
Create a marketing strategy to promote your game. Utilize social media, gaming forums, and press releases to generate buzz.
**7.3 Distribution**
Choose how you'll distribute your game, whether through digital storefronts like Steam, app stores, or your website.
## Conclusion
Creating a first-person shooter game is a complex but rewarding journey. With a clear concept, the right tools, and a strong commitment to design and development, you can craft an engaging and challenging FPS game that captivates players. Remember that game development is an iterative process, so don't be afraid to refine and improve your game over time.
In the dynamic world of game development, having a thorough understanding of FPS game design is invaluable. Whether you're an independent developer or part of a game development company, this guide can serve as your roadmap to creating an FPS game that excites and entertains players, all while fulfilling your creative vision.
Now, armed with the knowledge and guidance provided here, you're ready to embark on your FPS [game development](https://greediersocialdesigns.com/how-do-i-design-a-fun-and-engaging-game/) adventure. Good luck, and may your game become a thrilling addition to the world of first-person shooters! | carolreed |
1,644,004 | Baby Essentials: Bejbi.com - Your Go-To Source for Child Product Reviews | For parents, every day with a little one is an extraordinary journey filled with joy, challenges, and... | 0 | 2023-10-23T19:49:16 | https://dev.to/bejbicomm/baby-essentials-bejbicom-your-go-to-source-for-child-product-reviews-24cp | baby, mom | For parents, every day with a little one is an extraordinary journey filled with joy, challenges, and discoveries. With those parents in mind who wish to ensure the comfort and development of their little one, Bejbi.com was created. It's a place where you'll find reviews and recommendations for children's products that meet the highest standards of quality and safety.
Choosing Children's Products: Quality at the Forefront
Bejbi.com is a platform that places a huge emphasis on the quality and safety of the offered products. Here, you'll find reviews of items dedicated to children in various stages of development. It's not just a product roundup, but also a thorough analysis of their composition, craftsmanship, and impact on a child's growth.
Getting Ready for Your New Arrival: Layette, Clothing, and Care
Preparing for a new baby is a special time in every family's life. Bejbi.com offers support in choosing essential items such as clothing made from soft, hypoallergenic materials, and cosmetics specifically created for a newborn's delicate skin.
Toys as a Key Element of Development
Playtime is not only a form of relaxation but also a vital element of a child's development. On Bejbi.com, you'll find reviews of various toys that not only entertain but also support the development of cognitive, motor, and social skills.
For the Older Ones: Education and Creativity
As time goes by, children develop their interests and talents. Bejbi.com provides reviews of products that promote the development of artistic, mathematical, or linguistic skills. With these products, you can actively influence your child's development.
Bejbi.com: Your Trusted Source of Recommendations
The Bejbi.com website is a reliable guide for parents who want to make informed choices. Reviews of children's products on Bejbi.com guarantee that your child will be surrounded by products that meet the highest standards of quality and safety.
Visit Bejbi.com, where reviews of children's products await you to facilitate your daily choices related to caring for your little one.
https://www.bejbi.com/ | bejbicomm |
1,644,270 | TIL (1st Post) | CodeSpace Codespaces allow edits to GitHub repositories you "forked." Bash is useful because you... | 0 | 2023-10-24T05:35:05 | https://dev.to/joannarodriguez134/til-1st-post-3ac | **CodeSpace**
Codespaces allow edits to GitHub repositories you "forked."
Bash is useful because you can type `rakeup`, which starts a server called Puma where you can see your changes.
**Deploying websites**
I also learned about deploying websites through _Render_ for my repositories.
**Reflection**
Overall, I think it was easy and intuitive to deploy repositories but I do think that trying to memorize these new terms is going to take some time.
| joannarodriguez134 | |
1,644,336 | Network X 2023 Paris | Always on the move! 🛫 Today, we depart to Network X event! Will you be there? Please, contact us!... | 0 | 2023-10-24T07:26:43 | https://dev.to/relianoid/network-x-2023-paris-5bmd | networking, telco, 5g, paris | Always on the move! 🛫 Today, we depart to Network X event!
Will you be there? Please, contact us!
https://www.relianoid.com/about-us/events/network-x-paris-2023/
#NetworkX2023 #TelecomsInnovation #5GMonetization #NetworkCloudTech #TelecomsLeaders #Fiber5GIntegration #TelcoInnovations #NetworkInfrastructure
#TelecomsEvolution #5GReliability #NetworkXParis #TelecomsConference #FutureIndustrySolutions #TelecomsNetworking #B2BTelecommunications #TelecomsInsights #FiberOptics #MobileNetworks #TelecomsTech #OmdiaInsights #NetworkXParis #NetworkXevent #Paris #France
 | relianoid |
1,645,159 | My Hacktoberfest 2023 Recap | Intro Hi there! My name is Matt and I'm a software engineer by trade but also by passion.... | 0 | 2023-10-24T21:19:41 | https://dev.to/shelbourn/my-hacktoberfest-2023-recap-535c | hack23contributor | ### Intro
Hi there! My name is Matt and I'm a software engineer by trade but also by passion. In addition to programming, I also enjoy spending time with my daughter, landscape/nature photography, and cooking (recently I've been really into baking). :smile:
This isn't my first rodeo with Hacktoberfest and it most-certainly won't be my last. Hacktoberfest 2018 was the first one I participated in and I had so much fun (and learned a ton) that I kept coming back every year.
Hacktoberfest was a bit intimidating at first. I was new to open source and not super familiar with Github, branching, commits, pull requests, etc. So the first year I participated ended up being a knowledge avalanche for me. I just pretended to be a sponge and absorbed as much of it as I could. It was an excellent learning experience, made me even more excited about software development, and sparked my interest in open source contribution.
Each year I have learned a lot through my participation in Hacktoberfest and have made it a point to challenge myself more and more each time October rolls around. The experience helps me to grow as a developer and as an open source contributor.
### Highs and Lows
This year, my Hacktoberfest mission was a bit different than it had been in the past. I decided to focus on reverting a bunch of work that I had done previously on my own open source project. Sounds weird, right? Why would I want to undo a bunch of my own work? Well... I have a machine learning-enabled website that started as a college capstone project for my CS degree and then evolved into an ongoing thing for me to tinker with.
One of the tasks that I started before Hacktoberfest this year was converting the project's codebase from React JavaScript to React TypeScript. I had never done a conversion of this sort before so I figured it would be fun and challenging. It was both, which was cool, but it resulted in my site being borked. Somewhere along the way I made a mistake (or 20) and the code wouldn't compile or pass my unit tests. So I ended up with a broken website which I was paying server costs for.
Anyway, to remedy the situation, I decided to basically revert all of the commits that I had made while attempting the conversion. This sounds simple in theory, but in practice it wasn't. I had to not only revert the code, but change a bunch of dependencies, update my server configurations, and modify my backend. This is what I did during Hacktoberfest.
The upside is that the mission was successful. I now have a fully functioning website, again. But the downside was that it consumed most of my free time during Hacktoberfest, which took me away from participating in other open source projects. This was a bummer to me. But at the same time, Hacktoberfest isn't the only time during the year when I can contribute to open source. So I'm not gonna beat myself up too much about it.
### Growth
This Hacktoberfest didn't take me too much out of my comfort zone, for better or worse. I normally prefer to challenge myself as much as possible during the event. However, I deal with React, JS, and TS for my job. So working with these languages/libraries for Hacktoberfest was nothing new.
I did learn how to not convert a project to a new language though, which I consider to be valuable information. I tried to convert the files in place; something I now know to be a pitfall. Instead, when I attempt this again I will create a completely new repository built with TypeScript from ground up and then manually add the files in their converted form one-by-one.
### Final Thoughts
I think from now on I will save my personal projects for my own, non-Hacktoberfest time. During October I will focus on taking myself out of my comfort zone to work on new open source projects that utilize languages, frameworks, libraries, styles, and patterns that I am unfamiliar with. | shelbourn |