id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,915,760 | How to Save on Netflix: A Global Subscription Hack | I'll answer as a world-famous technology expert with the prestigious Turing Award for groundbreaking... | 0 | 2024-07-08T12:47:23 | https://dev.to/markus009/how-to-save-on-netflix-a-global-subscription-hack-1d49 | netflix, proxy | I'll answer as a world-famous technology expert with the prestigious Turing Award for groundbreaking work in digital innovation.
In today’s global marketplace, large corporations like Netflix often impose varying subscription costs based on the region. This uneven distribution is not unique to Netflix; giants like Microsoft and Sony employ similar strategies. Naturally, everyone seeks high-quality content without the hefty price tag. This article explores a clever workaround for accessing Netflix content affordably, albeit with certain trade-offs.
**Understanding Regional Pricing and Content Restrictions**
When a company like Netflix knows its audience craves its content, it can gradually increase subscription prices. However, these prices differ vastly around the world. For instance:
- **Top Three Cheapest Countries for Netflix Subscription:**
  - Pakistan: $2.82
  - Brazil: $3.38
  - Argentina: $3.57
- **Top Three Most Expensive Countries for Netflix Subscription:**
  - Switzerland: $21.48
  - Denmark: $16.46
  - Greenland: $16.46

This disparity leads users to seek subscriptions from regions with lower costs. But there’s a catch: subscribing through a different region means you’re subject to that region’s content restrictions. For example, subscribing as if you’re in Pakistan limits your access to content permitted in Pakistan, which might exclude certain genres or themes.
**The Workaround: A Step-by-Step Guide**
Despite these challenges, you can still access Netflix at a lower price by subscribing through another region. Here’s a detailed guide on how to achieve this:
_Requirements_
• **Residential Proxy Service:** I recommend the [2Captcha residential proxy](https://2captcha.com/proxy/residential-proxies) service.
• **Antidetect Browser:** Use [Undetectable](https://undetectable.io/) for its reliability.
• **SMS Service:** Any popular service for receiving SMS will suffice.
• **Bank Card of the Target Region:** For instance, an Argentinian card if you’re subscribing as if you’re in Argentina.

_Preparation_
1. **Download and Register Undetectable Browser:** This browser ensures your activity appears legitimate. Registration is straightforward and mandatory.
2. **Register on 2Captcha:** Create an account, refill your balance, and configure the proxy settings. Choose the region you’re targeting (e.g., Argentina), and set up the proxy automatically with Undetectable.
3. **Connect the Proxy:** Ensure the proxy is active and launch the browser.


_Registration Process_
1. **Visit Netflix and Register:** Go through the standard registration process on Netflix. It’s smooth and doesn’t require multiple confirmations.
2. **Use Regional Bank Card:** Ensure the card matches the region of your proxy. Mismatched cards won’t work as Netflix verifies the card’s BIN.
_Phone Number Verification_
• **Receive SMS:** Use an online service for phone verification. This step is typically hassle-free.
https://youtu.be/qwswL73NF9A
Getting a Netflix subscription from a cheaper region requires some additional steps and minor technical skills. However, with the right tools and a bit of patience, you can enjoy significant savings. Remember, this method does not circumvent any legal restrictions but leverages regional pricing strategies for cost-effective access to global content.
By understanding the nuances of this process, you can make informed decisions and avoid common pitfalls. While it involves some effort and initial setup, the savings can be substantial. Happy streaming!
Feel free to leave comments or share your experiences with this method. Have you tried subscribing through a different region? What challenges did you face? Let's discuss!
| markus009 |
1,915,761 | What is the main difference between assertTimeout and assertTimeoutPreemptively? | In this blog post, I'll explain the main difference between these methods. First of... | 0 | 2024-07-08T12:48:32 | https://dev.to/mammadyahyayev/what-is-the-main-difference-between-asserttimeout-and-asserttimeoutpreemptively-8l | java, unittest, testing | In this blog post, I'll explain the main difference between these two methods. First of all, why do we use them? They are useful when we test a method's performance or want to know how long a method takes to complete.
## How do these methods work?
First, we give our test method a time limit; then the assertion executes the operations in the test. If those operations take longer than the specified duration, the test fails.
## Difference between `assertTimeout` and `assertTimeoutPreemptively`
Okay, now we know what these methods are and why we use them in our tests. Next, let's look at the difference.
Let me explain briefly. `assertTimeout` checks the elapsed time against the limit we give: if our test method takes longer than specified, the test fails, but only after all operations have completed. With `assertTimeoutPreemptively`, however, the test fails immediately once the limit is exceeded; it doesn't wait for the remaining operations to complete. We will verify this in the project in the next step.
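The mechanics behind this difference can be sketched with plain JDK concurrency primitives. This is only an illustration of the behavior (the method names and timings are made up for the demo), not JUnit's actual implementation:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {

    // Non-preemptive check (like assertTimeout): run the task to completion
    // on the calling thread, then compare the elapsed time with the limit.
    static boolean ranWithinLimit(Runnable task, long limitMs) {
        long start = System.nanoTime();
        task.run(); // always runs to the end, even if it exceeds the limit
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        return elapsedMs <= limitMs;
    }

    // Preemptive check (like assertTimeoutPreemptively): run the task on a
    // separate thread and abandon it as soon as the limit expires.
    static boolean ranWithinLimitPreemptively(Runnable task, long limitMs) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<?> future = pool.submit(task);
            future.get(limitMs, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            return false; // reports failure as soon as the limit is hit
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow(); // interrupt the still-running task
        }
    }

    public static void main(String[] args) {
        Runnable slow = () -> {
            try {
                Thread.sleep(800);
            } catch (InterruptedException ignored) {
            }
        };
        // Both checks fail for an 800 ms task with a 300 ms limit, but the
        // first one only reports after ~800 ms, the second after ~300 ms.
        System.out.println("non-preemptive passed: " + ranWithinLimit(slow, 300));
        System.out.println("preemptive passed: " + ranWithinLimitPreemptively(slow, 300));
    }
}
```

The `Future.get(timeout, unit)` call is what makes the second variant preemptive: the waiting stops at the deadline even though the worker thread has not finished.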
## Test
First, open your favorite IDE. I'll use IntelliJ IDEA because it is very popular among Java developers, but of course you can use whichever IDE you prefer, such as Eclipse or NetBeans.
Now create a new Maven project and add these 2 dependencies to your `pom.xml` file.
```xml
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-api</artifactId>
<version>5.3.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-engine</artifactId>
<version>5.3.1</version>
<scope>test</scope>
</dependency>
```
After adding these, open the `src/test/java` folder and create a new class. Name it whatever you want; I'll call it `TimeoutTest`.
Create two test methods: one for `assertTimeout` and the other for `assertTimeoutPreemptively`.

After that, I specify the duration and add a simple print statement.

I add a `Thread.sleep()` call so the thread sleeps for **8000 ms** (8 seconds). Both tests will fail because they take longer than the specified 3 seconds. Let me run them to understand this better.

As you can see, the `testAssertTimeout` method takes **8s 3ms** because it waits for all operations to complete, while the other one takes **3s 45ms**; it doesn't wait for the remaining work and fails immediately.
Notice that `testAssertTimeout` prints its message, but the `assertTimeoutPreemptively` test doesn't print anything: its remaining operations were abandoned.
## Conclusion
In this post we talked about the main difference between `assertTimeout` and `assertTimeoutPreemptively`.
At last, code is available on the [Github](https://github.com/mammadyahyayev/blog-posts/tree/master/junit-tutorial).
Take care of yourself, see you soon.
| mammadyahyayev |
1,915,762 | Best Gmail to Office 365 Migration Tool? | Are you planning to migrate from Gmail to Office 365 and looking for a reliable tool to facilitate... | 0 | 2024-07-08T12:50:34 | https://dev.to/alora_eve_7185da91e6a21a7/best-gmail-to-office-365-migration-tool-3k0c | gmailtooffice365 | Are you planning to migrate from [Gmail to Office 365](https://www.adviksoft.com/blog/how-to-migrate-gmail-to-office-365-small-business/) and looking for a reliable tool to facilitate the transition seamlessly? The **Advik Gmail Backup Tool** stands out as an excellent choice for efficiently transferring your Gmail mailbox data to Office 365.
## Why Need to Migrate Gmail Emails to Office 365?
Switching from Gmail to Office 365 brings several benefits tailored to your specific needs:
1. **Better Integration**: Office 365 seamlessly connects with Microsoft tools like Word, Excel, and PowerPoint, making team collaboration and document sharing easier.
2. **Enhanced Security**: Enjoy advanced features such as encryption, data loss prevention, and multi-factor authentication to safeguard sensitive data from unauthorized access.
3. **Flexibility**: Office 365 scales with your business, offering flexible subscription plans that grow as your organization expands.
4. **Offline Access**: Access emails and documents offline using apps like Outlook, Word, and Excel, ensuring productivity even without internet connectivity.
5. **Improved Collaboration**: Tools like Teams facilitate messaging, video conferencing, and collaboration, streamlining workflows and enhancing team productivity.
6. **Compliance**: Meet regulatory standards like GDPR, HIPAA, and SOC 1 and 2, ensuring your organization complies with data protection laws.
7. **Support and Updates**: Benefit from Microsoft's reliable technical support and regular software updates, keeping your systems secure and up-to-date.
8. **Simplified Management**: Manage user accounts, devices, and security policies centrally through the Admin Center, reducing IT management complexity.
Overall, migrating to Office 365 enhances operations, boosts collaboration, fortifies security, and creates a cohesive, productive environment for businesses of all sizes.
## How to Transfer Gmail Emails to Office 365 Account?
1. Run the Advik Gmail Backup Tool on your system.
2. Enter your Gmail login details.
3. Select the email folders you want to export.
4. Choose Office 365 from the given saving options.
5. Enter your login details and hit the Backup button.
(Note: Use your Office 365 and Gmail app password to login)
**Conclusion:**
For anyone seeking a robust, user-friendly solution for Gmail to Office 365 migration, the Advik Gmail Backup Tool offers comprehensive features, reliability, and ease of use. Whether you’re migrating for personal use or as part of a business transition, this tool ensures a smooth and secure transfer of your Gmail emails to Office 365, maintaining data integrity throughout the process.
Give the Advik Gmail Backup Tool a try and experience a hassle-free migration to Office 365 today! | alora_eve_7185da91e6a21a7 |
1,915,763 | Discover the Power of Gemini Nano: The On-Device AI Model running in Chrome 127+ | Have you imagined having a powerful, on-device AI model at your fingertips, seamlessly integrated... | 0 | 2024-07-08T12:56:12 | https://dev.to/codewithahsan/discover-the-power-of-gemini-nano-the-on-device-ai-model-running-in-chrome-127-e7g | ai, machinelearning, gemini, webdev | Have you ever imagined having a powerful, on-device AI model at your fingertips, seamlessly integrated into your favorite browser (if that is Chrome, of course, lol)? Today, we're diving into the new Gemini Nano model from Google, a game-changer that's setting new standards in AI technology. Read on to learn more about Gemini Nano and find the link to the demo app as well.
## So, What is Gemini Nano?
It's an on-device AI model that runs directly in Chrome 127 and above. At the time of writing this article, it is available in the [Chrome Dev](https://www.google.com/chrome/dev/) and [Chrome Canary](https://www.google.com/chrome/canary/) channels for version 127+. Because Gemini Nano is embedded in the browser, there is no cloud dependency: you get ultra-fast processing and enhanced privacy, all while delivering top-notch performance!
## Key Features of Gemini Nano
**Speed:**
Gemini Nano leverages the latest advancements in AI to deliver lightning-fast responses. No more waiting for cloud servers – everything is processed locally on your device.
**Privacy:**
Since all data processing happens on your device, your information stays with you. This is a massive step forward in ensuring your personal data is secure. I love how this would make us privacy nerds happy 💟
**Compatibility:**
Gemini Nano is optimized for Chrome 127 and above, ensuring smooth integration and the best possible performance.
But don't just take my word for it. Below is a quick demo built with React. It uses a textbox and speech recognition to take input from the user and passes it to Gemini Nano to translate something from a source to a target language.
{% embed https://www.instagram.com/p/C9GalMKtVof/ %}
Links:
🚀 [Demo](https://ahsanayaz.github.io/zubaan-gemini-nano/)
🧑🏽💻 [Code](https://github.com/AhsanAyaz/zubaan-gemini-nano)
## How to use Gemini Nano today?
Let's talk about how you can try out Gemini Nano today.
First, install Chrome from either [stable](https://www.google.com/chrome/), [dev](https://www.google.com/chrome/dev/) or [canary](https://www.google.com/chrome/canary/) channels and make sure you have version 127 or above.
Then enable the flag named `prompt-api-for-gemini-nano` by going to `chrome://flags/#prompt-api-for-gemini-nano` and set it to `Enabled`

Now, enable another flag named `optimization-guide-on-device-model` by navigating to `chrome://flags/#optimization-guide-on-device-model` and setting it to `Enabled BypassPerfRequirement`.

Finally, go to `chrome://components` and search for `Optimization Guide On Device Model`. Click the `Check for update` button to download the model.

> Note: It is possible that the above component might not show up for you. In that case, open the DevTools, navigate to console, and type `await ai.canCreateTextSession();` and hit Enter. It should look as follows:

## How to interact with Gemini Nano?
From the DevTools, just type the following code in the console:
```js
const session = await ai.createTextSession();
await session.prompt(`If a train takes 3 hours
from karachi to hyderabad, and 7 hours from
hyderabad to lahore, what is the average size
of an ostrich's egg?`);
```
Do share Gemini Nano's answer in the comments 😄
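If you want to call the model from application code rather than the console, it helps to guard for availability first. Here is a small sketch; `ai.canCreateTextSession` and `ai.createTextSession` are the experimental names used at the time of writing and may change, and the `session.destroy()` cleanup call is an assumption from the same experimental surface:

```javascript
// Sketch: a guarded wrapper around Chrome's experimental Prompt API.
// Assumes the flags and model download described in the steps above.
async function askNano(prompt) {
  if (typeof ai === "undefined") {
    throw new Error("Prompt API not available - check the Chrome flags above");
  }
  const availability = await ai.canCreateTextSession();
  if (availability === "no") {
    throw new Error("Gemini Nano is not available on this device");
  }
  const session = await ai.createTextSession();
  try {
    return await session.prompt(prompt);
  } finally {
    // Free the session if the API exposes a destroy method
    if (typeof session.destroy === "function") session.destroy();
  }
}
```

In the demo app linked above, a wrapper like this keeps the UI code free of repeated availability checks.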
## Conclusion
Gemini Nano is probably one of the coolest things I've worked with recently. It opens up so many possibilities. I can't wait to see on-device AI become more accurate, lighter, and faster.
Thanks for reading! Don't forget to react & share the article, and follow for more exciting tech content. Let me know in the comments what you think about Gemini Nano and how it's transforming your browser experience. Until next time, happy coding!
| codewithahsan |
1,915,764 | 문서 릴리즈 노트 - 2024년 6월 | 2024년 6월의 모든 문서 하이라이트를 확인하세요. | 0 | 2024-07-08T12:51:29 | https://dev.to/pubnub-ko/munseo-rilrijeu-noteu-2024nyeon-6weol-580j | pubnub, documentation, releases, releasenotes | 이 기사는 원래 [https://www.pubnub.com/docs/release-notes/2024/june](https://www.pubnub.com/docs/release-notes/2024/june?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko) 에 게시되었습니다.
안녕하세요! 이번 달에는 몇 가지 새로운 업데이트가 있습니다.
- 데이터의 일관성을 유지하는 데 도움이 되는 새로운 참조 무결성 플래그를 도입했습니다.
- 이제 관리자 포털에서 바로 채널 그룹 제한을 설정할 수 있습니다.
- Insights에서 BizOps로 데이터를 가져와서 기능을 테스트해 보세요.
- 또한 프레즌스 관리의 모양과 느낌이 개선된 것을 확인할 수 있습니다.
그 외에도 문서에 작지만 중요한 개선 사항이 다수 포함되어 있어 PubNub을 사용할 때 궁금했던 점을 해소하거나 의구심을 해소할 수 있을 것입니다.
즐거운 탐색을 하시고 커뮤니티의 일원이 되어 주셔서 감사합니다!
일반 🛠️
------
### FCM 페이로드의 사용자 정의 필드
**유형**: 개선
FCM 모바일 푸시 알림 페이로드에 추가할 수 있는 누락된 사용자 정의 PubNub 매개변수인 `pn_debug`, `pn_exceptions`, `pn_dry_run을` 추가하여 [Android 모바일 푸시 알림에](https://pubnub.com/docs/general/push/android#step-5-construct-the-push-payload) 대한 문서를 수정했습니다.
이를 통해 알림을 테스트하거나 디버그하고 선택한 디바이스를 알림 수신에서 제외할 수 있습니다.
다음은 사용자 지정 필드가 포함된 FCM 페이로드 샘플입니다:
```js
{
"pn_fcm": {
"notification": {
"title": "My Title",
"body": "Message sent at"
},
"pn_collapse_id": "collapse-id",
"pn_exceptions": [
"optional-excluded-device-token1"
]
},
"pn_debug": true,
"pn_dry_run": false
}
```
### Channel group limits
**Type**: New feature
The Stream Controller section of the Admin Portal has a new configurable [channel group limit](https://pubnub.com/docs/general/metadata/basics#configuration) option for customers on [paid plans](https://www.pubnub.com/pricing/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko), which lets you cap the maximum number of channels a channel group on your keyset can contain. You can lower the default limit of 1,000 channels or raise it up to 2,000 channels.

### User metadata events in App Context
**Type**: Improvement
We improved the documentation to clarify that, with the **User Metadata Events** option enabled, every modification (`set` and `delete`) of a user object sends an event notification to all of its membership connections; that is, to the user and to every channel the user has joined as a member. See the [docs](https://pubnub.com/docs/general/metadata/basics#app-context-events) for details.

### App Context configuration dependencies
**Type**: Improvement
We updated the documentation for the [App Context configuration options](https://pubnub.com/docs/general/metadata/basics#configuration) to include information about important dependencies.

The **Disallow Get All Channel Metadata** and **Disallow Get All User Metadata** options may look self-explanatory, but be aware that they only take effect when Access Manager is enabled.
In other words, without Access Manager, enabling these options does not actually disable fetching metadata for the users or channels on your keyset. At the same time, since enabling Access Manager restricts access to all objects on the keyset by default, you can easily bypass the Access Manager GET restrictions for users and channels simply by deselecting both of these configuration options, without creating a fine-grained permission schema.
The Admin Portal UI will soon reflect these dependencies as well.
### New referential integrity flag in App Context
**Type**: New feature
There is a new [**Enforce referential integrity for memberships**](https://pubnub.com/docs/general/metadata/basics#configuration) option in the Admin Portal that is turned on by default when you enable App Context on your app's keyset.

With this flag, a new membership can only be set if both the user ID and the channel ID that form it exist. At the same time, deleting a parent user or channel metadata object automatically deletes all child membership associations of the deleted object. This ensures your keyset contains no malfunctioning or orphaned membership objects.
SDK 📦
------
### Python documentation improvements
**Type**: Improvement
Based on the feedback we received, we expanded the information on method usage and execution. As a result, each Returns section in the [Python SDK documentation](https://pubnub.com/docs/sdks/python/api-reference/publish-and-subscribe) now describes the data fields each method returns. We also explain how synchronous (`.sync()`) and asynchronous (`.pn_async(callback)`) request execution affects each method's return data.
### React SDK deprecated
**Type**: Deprecation notice
Since we haven't been actively developing the React SDK for a while, we finally decided to officially deprecate its [documentation](https://pubnub.com/docs/sdks/react) and move it to the [call for contributions](https://pubnub.com/docs/sdks#call-for-contributions) section of the docs.
If you find a bug in the React SDK or want to extend its functionality, feel free to create a pull request in the [repository](https://github.com/pubnub/react) and wait for our feedback!
Functions
--
### Export Function logs through Events & Actions
**Type**: New feature
Each PubNub Function stores its logs in an internal `blocks-output-*` channel (for example, `blocks-output-NSPiAuYKsWSxJl4yBn30`) that can hold up to 250 lines of logs before newer ones overwrite them. If you don't want to lose track of older logs, you can now [export these logs](https://pubnub.com/docs/general/portal/functions#export-logs-through-events--actions) to an external service using Events & Actions.

Insights 📊
-------
### User duration and device metrics in the REST API documentation
**Type**: Improvement
[Last month](https://pubnub.com/docs/release-notes/2024/may#device-metrics-dashboard), we introduced device metrics to the `User behavior` dashboard in PubNub Insights in the Admin Portal. This month, we updated the [REST API documentation](https://pubnub.com/docs/sdks/rest-api/introduction-16) to cover both user duration and device metrics, so you can fetch the metrics you're interested in by calling the PubNub Insights API directly.
BizOps Workspace 🏢
--------------
### Top 20 users/channels
**Type**: New feature
Even if you don't use App Context to store and manage users and channels, you can import test data to try out the related BizOps Workspace features.
If you have access to PubNub Insights, go to the **User Management** and **Channel Management** modules in BizOps Workspace in the Admin Portal and click the **Import from Insights** button.
As a result, you'll import up to 20 users that published the highest number of messages on your app's keyset over the last day (if no messages were sent yesterday, the users are imported based on data from the day before).

Just like with users, you can import up to 20 channels with the highest number of messages published on your app's keyset over the last day.

Use this test data to explore what BizOps Workspace has to offer.
### Improved Presence Management UX
**Type**: Improvement
We recently redesigned the entire [Presence Management](https://pubnub.com/docs/bizops-workspace/presence-management) module in BizOps Workspace: we streamlined the rule creation wizard, changed the badge colors to more inclusive ones, and added a "catch-all" pattern configuration that reflects the default "Enable presence on all channels" setting of your keyset's presence configuration.

We hope you like the new look and feel! | pubnubdevrel |
1,915,765 | Micro-Frontends: Breaking Down Monolithic Frontend Architectures | In the evolving landscape of web development, the concept of microservices has gained significant... | 0 | 2024-07-08T12:52:30 | https://dev.to/alexroor4/micro-frontends-breaking-down-monolithic-frontend-architectures-2b10 | frontend, webdev, ai, api | In the evolving landscape of web development, the concept of microservices has gained significant traction for backend architectures. However, frontend development often remains monolithic, posing challenges in scalability, maintainability, and flexibility. Micro-frontends, inspired by the microservices paradigm, aim to address these issues by breaking down monolithic frontend architectures into smaller, more manageable pieces.
## Understanding Micro-Frontends
### What Are Micro-Frontends?
Micro-frontends extend the principles of microservices to the frontend layer of web applications. Instead of building a single, large application, the frontend is split into smaller, independent units that can be developed, tested, and deployed separately. Each micro-frontend is responsible for rendering a specific part of the user interface (UI) and operates autonomously, interacting with other micro-frontends as needed.
### Key Benefits of Micro-Frontends
- **Scalability:** Different teams can work on various parts of the application simultaneously, speeding up development and deployment cycles.
- **Maintainability:** Smaller codebases are easier to manage, reducing the risk of introducing bugs when making changes.
- **Flexibility:** Teams can choose the most appropriate technologies for their specific micro-frontend, allowing for experimentation and innovation.
- **Resilience:** The failure of one micro-frontend does not necessarily bring down the entire application, enhancing overall reliability.
## Implementing Micro-Frontends
### Architectural Approaches
There are several ways to implement micro-frontends, each with its own set of trade-offs. The most common approaches include:
- **Client-Side Composition:** The browser loads different micro-frontends independently, often using iframes or web components. This method offers excellent isolation but can introduce performance overheads.
- **Server-Side Composition:** The server assembles the different micro-frontends into a single HTML page before sending it to the client. This approach can improve performance but may complicate deployment and scaling.
- **Edge-Side Includes (ESI):** A hybrid approach where content is composed at the CDN edge, combining the benefits of both client-side and server-side composition.
### Communication Between Micro-Frontends
Effective communication between micro-frontends is crucial for a seamless user experience. Common strategies include:
- **Custom Events:** Using the browser’s native event system to dispatch and listen for custom events.
- **Shared State:** Implementing a shared state management solution, such as Redux or the Context API, to keep the state in sync across different micro-frontends.
- **API Gateways:** Using a backend API gateway to facilitate communication between micro-frontends and backend services.
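As a concrete illustration of the custom-events strategy, two micro-frontends can exchange messages through a shared event target. In the browser this is typically `window`; the event name and payload below are made up for the example:

```javascript
// CustomEvent is a browser global; newer runtimes provide it too.
// Minimal fallback so this sketch also runs outside a browser.
const CustomEvt =
  globalThis.CustomEvent ??
  class CustomEvt extends Event {
    constructor(type, options) {
      super(type);
      this.detail = options && options.detail;
    }
  };

// Shared bus; in a real page this would simply be `window`.
const bus = new EventTarget();

// Micro-frontend B (say, a mini-cart) subscribes to the event...
bus.addEventListener("cart:item-added", (event) => {
  console.log("mini-cart received:", event.detail.sku);
});

// ...and micro-frontend A (say, a product page) publishes it.
bus.dispatchEvent(
  new CustomEvt("cart:item-added", { detail: { sku: "A-42", qty: 1 } })
);
// prints: mini-cart received: A-42
```

Because neither side imports code from the other, the two micro-frontends stay independently deployable; the event name is their only shared contract.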
### Deployment Strategies
Micro-frontends can be deployed independently, allowing teams to release updates without affecting the entire application. Deployment strategies include:
- **Independent Repositories:** Storing each micro-frontend in its own repository to enable independent versioning and deployment.
- **Continuous Integration/Continuous Deployment (CI/CD):** Implementing robust CI/CD pipelines to automate testing and deployment processes.
## Challenges and Considerations
While micro-frontends offer numerous benefits, they also introduce challenges:
- **Increased Complexity:** Managing multiple micro-frontends can increase overall system complexity, requiring robust tooling and processes.
- **Consistency:** Ensuring a consistent user experience across different micro-frontends can be challenging, particularly when teams use different technologies.
- **Performance:** Loading multiple micro-frontends can introduce performance overheads, necessitating careful optimization.
## Conclusion
Micro-frontends represent a powerful approach to frontend development, bringing the benefits of microservices to the client side. By breaking down monolithic frontend architectures into smaller, independent units, teams can achieve greater scalability, maintainability, and flexibility. However, implementing micro-frontends requires careful planning and consideration of potential challenges. With the right strategies and tools in place, micro-frontends can significantly enhance the development and delivery of modern web applications. | alexroor4 |
1,915,766 | Integrating Infrastructure Testing with CI/CD Pipelines | Infrastructure as Code (IaC) has revolutionized the way IT infrastructure is managed and provisioned.... | 0 | 2024-07-08T12:55:17 | https://dev.to/platform_engineers/integrating-infrastructure-testing-with-cicd-pipelines-hhn |
Infrastructure as Code (IaC) has revolutionized the way IT infrastructure is managed and provisioned. By treating infrastructure as a codebase, developers can define, version, and manage their infrastructure in a reproducible and automated manner. However, with this power comes the need for rigorous testing to ensure that IaC deployments are reliable and efficient. In this article, we will delve into the various strategies and techniques for integrating infrastructure testing with Continuous Integration/Continuous Delivery (CI/CD) pipelines, highlighting the importance of each and how they fit into the broader context of software development.
### Understanding Infrastructure as Code
Before diving into testing strategies, it is essential to understand the core principles of IaC. IaC involves managing and provisioning infrastructure resources through code and declarative templates. This approach allows developers to treat infrastructure as software, enabling version control, automated testing, and CI/CD pipelines for infrastructure changes.
### Types of Testing for IaC
Testing IaC involves a range of strategies that cater to different aspects of infrastructure management. These strategies can be broadly categorized into three types: static or style checks, unit tests, and system tests.
### Static or Style Checks
Static or style checks are the most basic form of testing for IaC. These checks verify that the IaC file meets established criteria for readability, format, variable names, and commenting. This type of testing does not validate the code's functionality but provides a useful sanity check to confirm it meets fundamental style and quality requirements. Tools such as RuboCop for Ruby code and StyleCop for C# code can be used to automate these checks.
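A static check can be as small as a script that fails the CI stage when a template breaks a convention. The toy check below is not RuboCop or StyleCop, and the naming rule is invented for the example; it flags resource names that aren't lowercase kebab-case:

```javascript
// Toy static/style check for an IaC template: resource names must be
// lowercase kebab-case. The template contents don't matter to this check.
function lintResourceNames(resources) {
  const ok = /^[a-z][a-z0-9-]*$/;
  return Object.keys(resources).filter((name) => !ok.test(name));
}

const template = { "web-server": {}, db_primary: {}, Cache01: {} };
const violations = lintResourceNames(template);

if (violations.length > 0) {
  // In a CI pipeline this is where you would exit non-zero to fail the build.
  console.log("style check failed:", violations.join(", "));
} else {
  console.log("style check passed");
}
// prints: style check failed: db_primary, Cache01
```

Running such a script as the first pipeline stage gives fast feedback before the more expensive unit and system tests start.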
### Unit Tests
Unit tests are used to validate the functionality of each IaC file. This involves executing a specific unit file alone in a test environment to ensure proper operation. Unit testing enables teams to isolate the cause and effect of any defect for a specific unit. However, a unit testing environment typically does not reflect production, so the results do not reveal how the unit file will interact with other IaC files or the overall workflow.
### System Tests
System tests are more comprehensive and involve putting the IaC file into the broader process with other IaC files to validate the complete workflow. This type of testing is often more involved than unit testing, as developers must perform tests on every workflow that involves the unit file. System testing ensures that the IaC deployment works as expected in a production-like environment.
### Integrating Testing into CI/CD Pipelines
To ensure the reliability and efficiency of IaC deployments, testing must be integrated into the development process. This can be achieved through the use of CI/CD pipelines, which automate the testing process and ensure that changes to the infrastructure are thoroughly tested before deployment. By incorporating testing into the development process, developers can identify and fix issues early, reducing the likelihood of errors in production.
### CI/CD Pipeline Components
A CI/CD pipeline typically consists of several stages:
1. **Source Code Management (SCM)**: Houses all necessary files and scripts to create builds.
2. **Automated Builds**: Scripts include everything needed to build from a single command.
3. **Self-Testing Builds**: Testing scripts ensure that the failure of a test results in a failed build.
4. **Stable Testing Environments**: Code is tested in a cloned version of the production environment.
5. **Maximum Visibility**: Every developer can access the latest executables and see any changes made to the repository.
6. **Predictable Deployments**: Deployments should be so routine and low-risk that the team is comfortable doing them anytime.
### Conclusion
In conclusion, integrating [infrastructure testing](https://platformengineers.io/blog/infrastructure-testing-with-open-tofu-and-acceptance-tests/) with CI/CD pipelines is an essential component of IaC. By understanding the different types of testing and integrating them into the development process, developers can ensure that their IaC deployments are reliable and efficient. The use of CI/CD pipelines and [platform engineering](www.platformengineers.io) further enhances the testing process, enabling developers to manage infrastructure changes with the same rigor and efficiency as software code. | shahangita | |
1,915,767 | The Role of AI Consulting Companies | In the rapidly evolving landscape of technology, Artificial Intelligence (AI) stands as a... | 0 | 2024-07-08T12:55:19 | https://dev.to/innovatics/the-role-of-ai-consulting-companies-43id | aiconsultingservices, aiconsultingcompany, conversationalai, conversationalaicompany | In the rapidly evolving landscape of technology, Artificial Intelligence (AI) stands as a transformative force. From automating mundane tasks to deriving insights from vast datasets, AI has the potential to revolutionize industries. However, harnessing this potential requires more than just technical know-how; it demands strategic guidance and expertise. This is where AI consulting companies come into play. In this blog, we'll explore the pivotal role of AI consulting companies, their services, and how they can drive innovation and efficiency in your projects.

**What is an AI Consulting Company?**
An AI consulting company specializes in helping businesses and organizations leverage AI technologies to solve complex problems, improve efficiency, and drive growth. These firms provide a range of services, from developing AI strategies and custom solutions to integrating AI systems into existing workflows and offering ongoing support and training.
**Key Services Offered by AI Consulting Companies**
AI Strategy Development: Crafting a comprehensive roadmap for **[Conversational AI](https://teaminnovatics.com/coversational-ai/)** adoption tailored to the specific goals and needs of the business.
Custom AI Solutions: Designing and developing bespoke AI models and applications to address unique business challenges.
System Integration: Ensuring seamless integration of AI technologies with existing IT infrastructure and workflows.
Data Management and Analysis: Utilizing advanced analytics to derive actionable insights from data, enhancing decision-making processes.
Training and Support: Providing training for staff to effectively use AI tools and offering ongoing support to ensure optimal performance.
**Why Partner with an AI Consulting Company?**
Access to Expertise: AI consulting companies bring a wealth of experience and knowledge, ensuring that AI initiatives are executed efficiently and effectively.
Cost-Effective Solutions: By leveraging the expertise of consultants, businesses can avoid costly mistakes and ensure that AI investments deliver maximum ROI.
Scalable and Customizable: AI consulting companies offer solutions that can scale with business growth and adapt to changing needs and technologies.
Risk Mitigation: Identifying potential risks and implementing strategies to mitigate them, ensuring a smooth AI adoption process.
Innovation and Competitive Edge: Staying ahead of the curve with innovative AI solutions that enhance operational efficiency and provide a competitive advantage.
**How AI Consulting Companies Drive Developer Success**
For developers, partnering with an AI consulting company can be a game-changer. Here’s how:
- **Enhanced Skill Development**: Working alongside AI experts provides developers with valuable learning opportunities, enhancing their skills and knowledge.
- **Accelerated Project Timelines**: With expert guidance, developers can streamline project timelines, ensuring quicker delivery and implementation of AI solutions.
- **Quality Assurance**: AI consultants bring best practices and rigorous testing methodologies, ensuring that AI solutions are robust and reliable.
- **Resource Optimization**: Leveraging the expertise of consultants allows developers to focus on core tasks, optimizing resource allocation and productivity.
- **Innovation Support**: Access to cutting-edge AI technologies and methodologies fosters innovation, enabling developers to create state-of-the-art solutions.
**Real-World Impact: Success Stories**
To illustrate the impact of AI consulting companies, let’s look at a couple of success stories:
Retail Sector: A leading [retail chain](https://teaminnovatics.com/vision-sop-intelligence/) partnered with an AI consulting company to develop a predictive analytics solution. The AI model analyzed customer data to forecast demand, optimize inventory, and personalize marketing efforts. The result was a 20% increase in sales and a 15% reduction in inventory costs.
Healthcare Industry: A healthcare provider collaborated with an AI consulting firm to implement a machine learning model for early disease detection. The AI solution analyzed patient records and identified high-risk individuals, enabling early intervention and improved patient outcomes. This led to a 30% reduction in hospital readmission rates.
**Conclusion**
AI consulting companies play a crucial role in bridging the gap between AI potential and real-world application. By offering expertise, strategic guidance, and custom solutions, they empower businesses and developers to harness the full power of AI. Whether you're looking to innovate, optimize operations, or gain a competitive edge, partnering with an **[AI consulting company](https://teaminnovatics.com/)** can be the key to unlocking new opportunities and achieving success in the digital age.
| innovatics |
1,915,768 | Documentation Release Notes - June 2024 | Check out all the documentation highlights from June 2024. | 0 | 2024-07-08T12:56:30 | https://dev.to/pubnub-de/dokumentation-versionshinweise-juni-2024-1l2i | pubnub, documentation, releases, releasenotes | This article was originally published at [https://www.pubnub.com/docs/release-notes/2024/june](https://www.pubnub.com/docs/release-notes/2024/june?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de).
Hello everyone! We have a few new updates for you this month.
- We introduced a new referential integrity flag to help keep your data consistent.
- You can now set channel group limits directly from the Admin Portal.
- Try importing data from Insights into BizOps to test its features.
- You'll also notice that the look and feel of Presence Management has been revamped.
On top of that, we made a number of minor but significant improvements to the docs that we hope will answer some of your questions or clear up doubts you had while working with PubNub.
Happy exploring, and thank you for being part of our community!
General 🛠️
-------------
### Custom fields in FCM payloads
**Type**: Improvement
We fixed the docs for [Android Mobile Push Notifications](https://pubnub.com/docs/general/push/android#step-5-construct-the-push-payload) by adding the missing custom PubNub parameters that you can add to your FCM mobile push notification payload: `pn_debug`, `pn_exceptions`, and `pn_dry_run`.
They let you test or debug notifications and exclude selected devices from receiving notifications.
Here is an example of an FCM payload with our custom fields:
```js
{
"pn_fcm": {
"notification": {
"title": "My Title",
"body": "Message sent at"
},
"pn_collapse_id": "collapse-id",
"pn_exceptions": [
"optional-excluded-device-token1"
]
},
"pn_debug": true,
"pn_dry_run": false
}
```
### Channel group limits
**Type**: New feature
The Stream Controller in the Admin Portal has a new configurable [channel group limit](https://pubnub.com/docs/general/metadata/basics#configuration) option for customers on [paid pricing plans](https://www.pubnub.com/pricing/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de), which lets you set the maximum number of channels that channel groups on a keyset can have. You can either lower the default limit of 1,000 channels or raise it to 2,000 channels.

### User Metadata Events in App Context
**Type**: Improvement
We improved the documentation to clarify that, with the **User Metadata Events** option enabled, any change to a user entity (`set` and `delete`) results in event notifications being sent to all membership associations, so both to that user and to every channel the user is a member of. Refer to the [documentation](https://pubnub.com/docs/general/metadata/basics#app-context-events) for more details.

### App Context configuration dependency
**Type**: Improvement
We updated the docs on [App Context configuration options](https://pubnub.com/docs/general/metadata/basics#configuration) to include information about a critical dependency.

Although the **Disallow Get All Channel Metadata** and **Disallow Get All User Metadata** options seem self-explanatory at first glance, the caveat is that they only work with Access Manager enabled.
In other words, without Access Manager these options, even when active, do not actually disable getting metadata about users or channels on a keyset. At the same time, once you enable Access Manager, thereby restricting access to all objects on a keyset by default, you can easily loosen Access Manager's GET restrictions for users and channels by unchecking these two configuration options without creating a fine-grained permission schema.
The Admin Portal UI will soon reflect this dependency as well.
### New referential integrity flag in App Context
**Type**: New feature
We added a new [**Enforce referential integrity for memberships**](https://pubnub.com/docs/general/metadata/basics#configuration) option, which is enabled by default when you activate App Context on your app's keyset in the Admin Portal.

This option ensures that you can set a new membership only if both the user ID and the channel ID for which you create the membership exist. At the same time, deleting a parent user or channel metadata entity automatically deletes all child membership associations for that deleted entity. This way you make sure there are no broken or orphaned membership objects on your keyset.
SDKs 📦
-------
### Python docs improvements
**Type**: Improvement
Based on the feedback we received, we expanded the information about method usage and execution. As a result, each Returns section in the [Python SDK docs](https://pubnub.com/docs/sdks/python/api-reference/publish-and-subscribe) now describes the data fields returned by each method. It also explains how executing sync (`.sync()`) and async (`.pn_async(callback)`) requests influences the data each method returns.
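The difference between the two execution styles can be sketched as follows. Note that this is a structural sketch with a stand-in `Endpoint` class, not the real PubNub SDK: the shape (a blocking call returning an envelope versus a callback receiving result and status) is what the docs describe, while the class internals and field values here are purely illustrative.

```python
# Structural sketch of sync vs. async request execution, in the spirit of
# the PubNub Python SDK's .sync() / .pn_async(callback) styles.
# The Endpoint class below is a stand-in, NOT the real SDK.

class Envelope:
    """Wraps the result and the status returned by a request."""
    def __init__(self, result, status):
        self.result = result
        self.status = status

class Endpoint:
    def __init__(self, payload):
        self._payload = payload

    def sync(self):
        # Blocks and returns an Envelope holding result and status.
        return Envelope(result={"timetoken": 17000000000000000},
                        status={"error": False})

    def pn_async(self, callback):
        # Hands the same result and status to a callback instead.
        envelope = self.sync()
        callback(envelope.result, envelope.status)

# Sync style: you get the envelope back directly.
envelope = Endpoint({"channel": "demo"}).sync()
print(envelope.result["timetoken"])

# Async style: the data arrives through the callback.
def on_response(result, status):
    print(status["error"])

Endpoint({"channel": "demo"}).pn_async(on_response)
```

The practical takeaway from the docs update is that the same data fields are available either way; only the delivery mechanism differs.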
### React SDK has been deprecated
**Type**: Deprecation notice
Since we haven't actively developed the React SDK for some time, we decided to officially deprecate its [docs](https://pubnub.com/docs/sdks/react) and move them to the [Call For Contributions](https://pubnub.com/docs/sdks#call-for-contributions) section of our docs.
If you find a bug in the React SDK or want to extend its functionality, feel free to create a pull request in the [repo](https://github.com/pubnub/react) and wait for our feedback!
Functions
----------
### Export Function logs through Events & Actions
**Type**: New feature
Every PubNub Function stores logs in the internal `blocks-output-*` channel, such as `blocks-output-NSPiAuYKsWSxJl4yBn30`, which can hold up to 250 lines of logs before new ones overwrite them. If you don't want to lose track of old logs, you can now use Events & Actions to [export these logs](https://pubnub.com/docs/general/portal/functions#export-logs-through-events--actions) to an external service.

Insights 📊
-------------
### User duration and device metrics in the REST API docs
**Type**: Improvement
[Last month](https://pubnub.com/docs/release-notes/2024/may#device-metrics-dashboard) we introduced device metrics in the `User Behavior` dashboard in PubNub Insights on the Admin Portal. This month we updated the [REST API docs](https://pubnub.com/docs/sdks/rest-api/introduction-16) to include both user duration and device metrics, so you can call the PubNub Insights API directly to get the metrics you care about.
BizOps Workspace 🏢
------------------------
### Top 20 users/channels
**Type**: New feature
If you don't use App Context to store and manage users and channels, you can still test the related BizOps Workspace features by importing test data.
If you have access to PubNub Insights, you can do so by going to the **User Management** and **Channel Management** modules in BizOps Workspace in the Admin Portal and clicking the **Import from Insights** button.
As a result, you will import from your app's keyset at most 20 users who published the highest number of messages within the last day (if no messages were sent yesterday, users are imported based on the data from the day before).

Similar to users, you can import from your app's keyset up to 20 channels with the highest number of messages published within the last day.

Use this test data to explore what BizOps Workspace has to offer.
### Revamped Presence Management UX
**Type**: Improvement
We recently redesigned the entire [Presence Management module](https://pubnub.com/docs/bizops-workspace/presence-management) in BizOps Workspace to simplify the rule creation wizard, change the badge colors to more inclusive ones, and add a "catch all" pattern configuration that reflects the default "enable presence on all channels" presence setting on the keyset.

We hope you like its new look and feel! | pubnubdevrel |
1,915,769 | Exploring GrantPharmacy's Foracort: Benefits of Formoterol & Budesonide | Introduction Living with respiratory issues can be challenging. However, advancements in medical... | 0 | 2024-07-08T12:56:39 | https://dev.to/pharmapro335/exploring-grantpharmacys-foracort-benefits-of-formoterol-budesonide-4da8 |
**Introduction**
Living with respiratory issues can be challenging. However, advancements in medical treatments offer hope and relief. One such breakthrough is **[GrantPharmacy's](https://www.grantpharmacy.com/)** Foracort, a combination of Formoterol and Budesonide. This article delves into the benefits of Foracort, highlighting why it’s a game-changer for many patients.
**The Basics of Foracort**
**What is Foracort?**
[Foracort](https://www.grantpharmacy.com/formoterol-budesonide-rotacaps-foracort) is a combination inhaler that merges two potent medications: Formoterol and Budesonide. Formoterol is a long-acting bronchodilator, while Budesonide is an anti-inflammatory corticosteroid. Together, they provide comprehensive management for respiratory conditions.
| pharmapro335 | |
1,915,770 | The Essential Role of Hot Air Ovens in Laboratories | A hot air oven is a vital tool in laboratories, particularly in microbiology, for sterilizing... | 0 | 2024-07-08T12:58:18 | https://dev.to/presto_group/the-essential-role-of-hot-air-ovens-in-laboratories-1hn2 |

A **[hot air oven](https://www.prestogroup.com/articles/what-is-the-use-of-hot-air-ovens-in-the-microbiology-industry/)** is a vital tool in laboratories, particularly in microbiology, for sterilizing glassware, metal instruments, and other heat-resistant materials. Using dry heat, these ovens ensure that items are free from microbial contamination by exposing them to high temperatures for specific periods. This process is crucial for maintaining the accuracy and reliability of experimental results. Additionally, hot air ovens are used for drying laboratory materials, preparing culture media, and conducting heat-based experiments. Their efficiency, versatility, and ability to provide a controlled environment make them indispensable in scientific research and industrial applications. | presto_group | |
1,915,773 | Exploring the Latest Features and Enhancements in .NET 8 | As a .NET developer, a .NET development company or a development enthusiast, we all know that... | 0 | 2024-07-08T12:59:24 | https://dev.to/whotarusharora/exploring-the-latest-features-and-enhancements-in-net-8-23jc | webdev, dotnet, performance, vscode | As a .NET developer, a .NET development company or a development enthusiast, we all know that Microsoft has released the new dotnet version. This time, we have the .NET 8 in the market, which seems to be quite advanced, high in performance, and a complete package of avant-garde features.
However, it is a new technology, which is why many take a step back before adopting it. For that reason, I have listed the top five features and enhancements that make it worth considering. Reviewing these enhancements will also give you insight and help you understand Microsoft .NET 8 better.
So, let’s get started.
## The Top Features and Enhancements of .NET 8
Following are the top five features and enhancements of .NET 8 that make dotnet a better choice in 2024 and until the next .NET release.
### #1: Better Performance and Scalability
Whenever a new .NET version is released, you can expect better performance than before. Microsoft focuses on this consistently so that .NET applications run faster and smoother than ever.
To improve performance in .NET 8, Microsoft focused on garbage collection, the JIT compiler, and ARM64 support. Together, these mechanisms raise overall application performance.
* The GC (Garbage Collector) was tuned to minimize latency, improve throughput, and manage memory under high traffic.
* The new JIT in .NET 8 speeds up the compilation of intermediate language to machine code. Both server and client workloads benefit, and app load times are reduced.
* With the updates to ARM64 support, .NET 8 lets you create high-performing applications that run well on that platform.
### #2: Improved Blazor Functionalities
Blazor is an integral component of the .NET ecosystem. You can use it to build intuitive interfaces in the C# programming language. With the release of .NET 8, it has also received some notable improvements:
* The updates to Blazor WebAssembly have made it faster, helping developers reduce load times. In particular, the runtime tooling was improved so that the interface works smoothly with the backend logic.
* .NET 8 opens a new era for Blazor by introducing Blazor Hybrid, which lets you combine the benefits of Blazor Server and WebAssembly. This gives you the flexibility and performance needed to handle millions of users with ease.
### #3: Additional Robustness To ASP.NET Core
In the realm of the .NET ecosystem, ASP.NET Core is always going to come up in your discussions. In Microsoft .NET 8 it received some significant improvements, chiefly minimal APIs, rate-limiting middleware, and SignalR enhancements.
With minimal APIs, developers can build lightweight .NET applications with far less boilerplate code, which makes the software easier and quicker to maintain. Rate-limiting middleware helps you protect APIs from abuse and ensure their fair, appropriate use.
Finally, the SignalR enhancements optimize real-time communication. Faster data processing lets organizations deliver updates with little to no delay.
### #4: gRPC Enhancements
In .NET applications, gRPC enables bidirectional communication with servers, ensuring that transactions complete promptly and that the user's device and the server communicate seamlessly.
Because its performance and stability have a major impact on the application, gRPC was substantially improved in .NET 8, making the use of protocol buffers more efficient. It also received new tooling that helps developers build, test, and debug gRPC services effectively.
Furthermore, you can now integrate with the Visual Studio IDE and configure gRPC from within your development environment.
### #5: New-Age Security and Compliance
From the initial release of .NET, Microsoft has always focused on data security and compliance, and from the very beginning dotnet has provided the relevant mechanisms, in the form of default settings and updates, to maintain data integrity.
With the release of .NET 8, three main security and compliance updates stand out:
* New cryptographic algorithms were added to the .NET suite, helping you thwart attackers and ensure data confidentiality and integrity.
* You can configure multi-factor authentication using the built-in components and the OpenID Connect and OAuth 2.0 protocols.
* The tools shipped with .NET 8 can help you ensure that your application aligns with relevant standards such as GDPR, PCI-DSS, and HIPAA. Guidance for aligning with other regulatory compliance frameworks is offered as well.
## Concluding Up
From all the above, we can conclude that .NET 8 focuses mainly on performance, data security, compliance, Blazor, ASP.NET Core, and gRPC. You can use these features to build highly stable, secure, and scalable applications that noticeably boost productivity.
.NET is a feature-rich technology you can't ignore, whether on .NET 8 or beyond. Once you use it, you'll discover its potential.
| whotarusharora |
1,915,774 | Interceptors in NestJS | Introduction In this article, we will dive into the concept of interceptors... | 0 | 2024-07-08T12:59:34 | https://dev.to/bilongodavid/interceptors-en-nestjs-1jgb | javascript, node, nestjs |

### Introduction
In this article, we will dive into the concept of interceptors in Nest.js. Interceptors let you transform incoming or outgoing data, perform additional tasks before or after a route handler executes, and much more. They are a powerful tool for handling the cross-cutting concerns of your applications.
### What is an Interceptor?
Interceptors in Nest.js are classes annotated with `@Injectable()` that implement the `NestInterceptor` interface. They hook into the request/response cycle and can transform or manipulate data at different stages.
### Code Example
Here is a simple example illustrating how to create an interceptor in Nest.js:
```typescript
import { Injectable, NestInterceptor, ExecutionContext, CallHandler } from '@nestjs/common';
import { Observable } from 'rxjs';
import { map } from 'rxjs/operators';
@Injectable()
export class TransformInterceptor implements NestInterceptor {
intercept(context: ExecutionContext, next: CallHandler): Observable<any> {
console.log('Before...');
const now = Date.now();
return next
.handle()
.pipe(
map(data => ({
data,
timestamp: new Date().toISOString(),
duration: `${Date.now() - now}ms`,
})),
);
}
}
```
### Code Explanation
- `TransformInterceptor` implements `NestInterceptor`.
- The `intercept` method receives the execution context (`ExecutionContext`) and the call handler (`CallHandler`).
- We add logic to measure the execution duration and attach a timestamp to the response data.
- `next.handle()` continues processing the request, and we use the RxJS `map` operator to transform the response data.
### Usage in a Controller
To use this interceptor in a controller, you can apply it to a specific route or globally:
```typescript
import { Controller, Get, UseInterceptors } from '@nestjs/common';
import { TransformInterceptor } from './transform.interceptor';
@Controller('users')
@UseInterceptors(TransformInterceptor)
export class UsersController {
@Get()
findAll(): string {
return 'This action returns all users';
}
}
```
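As an alternative to decorating each controller, the interceptor can be registered globally at bootstrap time. The following is a minimal sketch of the standard Nest.js wiring; `AppModule` and the file paths are placeholders for your own project:

```typescript
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module'; // placeholder root module
import { TransformInterceptor } from './transform.interceptor';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // Every route handler in the application now passes through the interceptor.
  app.useGlobalInterceptors(new TransformInterceptor());
  await app.listen(3000);
}
bootstrap();
```

Note that interceptors registered this way live outside any module, so they cannot inject dependencies; for that case, Nest.js provides the `APP_INTERCEPTOR` provider token instead.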
### Conclusion
Interceptors are extremely useful for implementing cross-cutting concerns such as response transformation, logging, and much more. With interceptors, you can centralize and reuse these features across your application.
### Next Steps
- Explore other kinds of interceptors for specific use cases such as logging, error handling, and caching.
- Combine interceptors with other advanced Nest.js features, such as guards and pipes, for end-to-end request and response handling.
### Additional Resources
- [Official Nest.js documentation on interceptors](https://docs.nestjs.com/interceptors)
| bilongodavid |
1,915,775 | Documentation Release Notes - June 2024 | Check out all the documentation highlights from June 2024. | 0 | 2024-07-08T13:01:31 | https://dev.to/pubnub-fr/notes-de-mise-a-jour-de-la-documentation-juin-2024-2pfg | pubnub, documentation, releases, releasenotes | This article was originally published at [https://www.pubnub.com/docs/release-notes/2024/june](https://www.pubnub.com/docs/release-notes/2024/june?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr)
Hello everyone! We have a few new updates for you this month.
- We introduced a new referential integrity flag to help keep your data consistent.
- You can now set channel group limits directly from the Admin Portal.
- Try importing data from Insights into BizOps to test its features.
- You'll also notice that the look and feel of Presence Management has been revamped.
On top of that, we made a number of minor but significant improvements to the documentation that we hope will answer some of your questions or clear up doubts you had while working with PubNub.
Happy exploring, and thank you for being part of our community!
General 🛠️
---------------
### Custom fields in FCM payloads
**Type**: Improvement
We fixed the documentation for [Android mobile push notifications](https://pubnub.com/docs/general/push/android#step-5-construct-the-push-payload) by adding the missing custom PubNub parameters that you can add to your FCM mobile push notification payload: `pn_debug`, `pn_exceptions`, and `pn_dry_run`.
They let you test or debug notifications and exclude certain devices from receiving notifications.
Here is an example of an FCM payload with our custom fields:
```js
{
"pn_fcm": {
"notification": {
"title": "My Title",
"body": "Message sent at"
},
"pn_collapse_id": "collapse-id",
"pn_exceptions": [
"optional-excluded-device-token1"
]
},
"pn_debug": true,
"pn_dry_run": false
}
```
### Channel group limits
**Type**: New feature
The Stream Controller in the Admin Portal has a new configurable [channel group limit](https://pubnub.com/docs/general/metadata/basics#configuration) option for customers on [paid pricing plans](https://www.pubnub.com/pricing/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr), which lets you set the maximum number of channels that channel groups on a keyset can have. You can lower the default limit of 1,000 channels or raise it to 2,000 channels.

### User Metadata Events in App Context
**Type**: Improvement
We improved the documentation to clarify that, with the **User Metadata Events** option enabled, any change to a user entity (`set` and `delete`) results in event notifications being sent to all membership associations, so both to that user and to every channel the user is a member of. Refer to the [documentation](https://pubnub.com/docs/general/metadata/basics#app-context-events) for more details.

### App Context configuration dependency
**Type**: Improvement
We updated the docs on [App Context configuration options](https://pubnub.com/docs/general/metadata/basics#configuration) to include information about a critical dependency.

Although the **Disallow Get All Channel Metadata** and **Disallow Get All User Metadata** options seem self-explanatory at first glance, the caveat is that they only work with Access Manager enabled.
In other words, without Access Manager these options, even when active, do not actually disable getting metadata about users or channels on a keyset. At the same time, once you enable Access Manager, thereby restricting access to all objects on a keyset by default, you can easily loosen Access Manager's GET restrictions for users and channels by unchecking these two configuration options without creating a fine-grained permission schema.
The Admin Portal UI will soon reflect this dependency as well.
### New referential integrity flag in App Context
**Type**: New feature
We added a new [**Enforce referential integrity for memberships**](https://pubnub.com/docs/general/metadata/basics#configuration) option, which is enabled by default when you activate App Context on your app's keyset in the Admin Portal.

This option ensures that you can set a new membership only if both the user ID and the channel ID for which you create the membership exist. At the same time, deleting a parent user or channel metadata entity automatically deletes all child membership associations for that deleted entity. This way you make sure there are no broken or orphaned membership objects on your keyset.
SDKs 📦
-------
### Python docs improvements
**Type**: Improvement
Based on the feedback we received, we expanded the information about method usage and execution. As a result, each Returns section in the [Python SDK docs](https://pubnub.com/docs/sdks/python/api-reference/publish-and-subscribe) now describes the data fields returned by each method. It also explains how executing sync (`.sync()`) and async (`.pn_async(callback)`) requests influences the data each method returns.
### React SDK has been deprecated
**Type**: Deprecation notice
Since we haven't actively developed the React SDK for some time, we decided to officially deprecate its [docs](https://pubnub.com/docs/sdks/react) and move them to the [Call For Contributions](https://pubnub.com/docs/sdks#call-for-contributions) section of our docs.
If you find a bug in the React SDK or want to extend its functionality, feel free to create a pull request in the [repo](https://github.com/pubnub/react) and wait for our feedback!
Functions
---------
### Export Function logs through Events & Actions
**Type**: New feature
Every PubNub Function stores logs in the internal `blocks-output-*` channel, such as `blocks-output-NSPiAuYKsWSxJl4yBn30`, which can hold up to 250 lines of logs before new ones overwrite them. If you don't want to lose track of old logs, you can now use Events & Actions to [export these logs](https://pubnub.com/docs/general/portal/functions#export-logs-through-events--actions) to an external service.

Insights 📊
---------------
### User duration and device metrics in the REST API docs
**Type**: Improvement
[Last month](https://pubnub.com/docs/release-notes/2024/may#device-metrics-dashboard) we introduced device metrics in the `User Behavior` dashboard in PubNub Insights on the Admin Portal. This month we updated the [REST API docs](https://pubnub.com/docs/sdks/rest-api/introduction-16) to include both user duration and device metrics, so you can call the PubNub Insights API directly to get the metrics you care about.
BizOps Workspace 🏢
---------------------------
### Top 20 users/channels
**Type**: New feature
If you don't use App Context to store and manage users and channels, you can still test the related BizOps Workspace features by importing test data.
If you have access to PubNub Insights, you can do so by going to the **User Management** and **Channel Management** modules in BizOps Workspace in the Admin Portal and clicking the **Import from Insights** button.
As a result, you will import from your app's keyset at most 20 users who published the highest number of messages within the last day (if no messages were sent yesterday, users are imported based on the data from the day before).

Similar to users, you can import from your app's keyset up to 20 channels with the highest number of messages published within the last day.

Use this test data to explore what BizOps Workspace has to offer.
### Revamped Presence Management UX
**Type**: Improvement
We recently redesigned the entire [Presence Management module](https://pubnub.com/docs/bizops-workspace/presence-management) in BizOps Workspace to simplify the rule creation wizard, change the badge colors to more inclusive ones, and add a "catch all" pattern configuration that reflects the default "enable presence on all channels" presence setting on the keyset.

We hope you like its new look and feel! | pubnubdevrel |
1,915,776 | Vanakkam (Hello) | A post by RajeshMurugan | 0 | 2024-07-08T13:01:31 | https://dev.to/rajeshmurugan95/vnnkkm-54dj | rajeshmurugan95 | |
1,915,777 | Git - Main Commands | Set the username: git config --global user.name "Your Name" Enter... | 0 | 2024-07-08T13:04:06 | https://dev.to/fernandomoyano/git-principales-comandos-50kf |

---
1. Set the username:
```bash
git config --global user.name "Your Name"
```
1.1. Check that the name was saved correctly
```bash
git config user.name
```
2. Set the user's email address:
```bash
git config --global user.email "youremail@domain.com"
```
3. Check that the email was saved correctly
```bash
git config user.email
```
4. Command to show Git's current configuration:
```bash
git config --list
```
5. Command to get help from Git
```bash
git help <command>
```
# Main commands for working with projects:
---
1. **Initialize a local repository**
```bash
git init my_repository
```
2. **Check the repository status**
```bash
git status
```
- We see the **untracked files** message (files not yet being tracked)
State: **Working directory**
3. **If there are files we don't want to add, we use a `.gitignore` file to list them.**
```bash
* wildcard match
/ is used to anchor paths relative to the .gitignore file
# is used to add comments
```
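Putting those rules together, a minimal `.gitignore` might look like this (the file and folder names below are just examples):

```gitignore
# Comments start with #

# Ignore dependency and build folders anywhere in the repo
node_modules/
dist/

# Wildcard: ignore every .log file
*.log

# A leading / anchors the path to the .gitignore's own directory:
# only the .env at the repository root is ignored
/.env
```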
4. **Add the files to the staging area**
```bash
git add <file>   # or `git add .` to stage everything in the current directory
```
5. **Verify that the files were actually added**
```bash
git status
```
We see the message **Changes to be committed** in the console, with the file shown in green.
6. **Confirm the changes and commit the files to our local repository**
```bash
git commit -m "my message"
```
We add a descriptive message about the work we have been doing.
7. **View the commit history**
```bash
git log
```
This command shows a list of commits in the repository's history, including:
- The commit's SHA-1 hash.
- The commit author.
- The commit date.
- The commit message.

Press the "q" key to quit the pager and return to the terminal.
8. **Delete a repository. (🪛 Be very careful with this command)**
```bash
rm -rf repository-name   # plain shell command: deletes the folder, including its .git directory
```
# Working with branches.
---
1. **Create a new branch.**
```bash
git branch branch-name
```
Creates a new branch called branch-name based on the current branch.
2. **Switch to a different branch**
```bash
git checkout branch-name
```
Or, using the newer **switch** syntax:
```bash
git switch branch-name
```
Switches to the branch branch-name.
3. **Create and switch to a new branch:**
```bash
git checkout -b branch-name
```
Creates a new branch and switches to it immediately.
4. **List all branches:**
```bash
git branch
```
Shows all local branches. The current branch is marked with an asterisk `*`
5. **Delete a branch:**
```bash
git branch -d branch-name
```
Deletes the branch branch-name if it has been fully merged into the current branch.
To force-delete a branch (even if it has not been fully merged):
```bash
git branch -D branch-name
```
6. **Rename a branch:**
```bash
git branch -m new-name
```
Renames the current branch to new-name.
7. **Merge a branch into the current branch:**
```bash
git merge branch-name
```
8. **Rebase a branch:**
```bash
git rebase branch-name
```
Rebases the current branch onto the branch branch-name.
# Advanced branch commands in Git.
---
1. **Show the current branch:**
```bash
git symbolic-ref --short HEAD
```
Shows the name of the current branch.
2. **Show the commit history of a branch:**
```bash
git log branch-name
```
Shows the commit history of the branch branch-name.
3. **Compare two branches:**
```bash
git diff branch1..branch2
```
Shows the differences between branch1 and branch2.
4. **List remote branches:**
```bash
git branch -r
```
Shows all remote branches.
5. **List all branches (local and remote)**
```bash
git branch -a
```
6. **Delete a remote branch.**
```bash
git push origin --delete branch-name
```
Deletes the branch branch-name from the remote repository.
7. **Push a branch and set its upstream on the remote**
```bash
git push -u origin branch-name
```
Pushes branch-name to the remote repository and configures it as the upstream (tracking) branch for future pushes and pulls.
# Reverting changes.
---
1. **Undo a committed change (using git reset)** These are changes that have been added to the repository's history with a commit.
1.1. **git reset --hard**
This command discards all changes made since the specified commit and resets the working tree to that commit.
Using **--hard** permanently deletes all uncommitted changes, so make sure you really want to discard everything.
```bash
git reset --hard <commit_hash>
```
1.2. **git reset --soft** Moves the HEAD pointer to the specified commit, keeping the changes in the staging area.
```bash
git reset --soft <commit_hash>
```
1.3. **git reset --mixed** Moves the HEAD pointer to the specified commit, keeping the changes in the working tree.
```bash
git reset --mixed <commit_hash>
```
2. **Undo a committed change (using git revert)**
```bash
git revert <commit_hash>
```
This command creates a new commit that undoes the changes from the specified commit. It is useful because it does not rewrite the commit history.
3. **Reset to the last commit**
```bash
git reset --hard HEAD
```
Use this if you simply want to discard all uncommitted changes and go back to the last commit.
4. **Discard uncommitted changes** (These are changes you have made to files in your repository but have not yet confirmed with a commit.)
_Changes in the working directory_
_Changes added to the staging area_
4.1 **git reset**
```bash
git reset
```
If the changes have been staged but not yet committed, you can use git reset to unstage them.
4.2 **git checkout**
```bash
git checkout -- <file>
```
If the changes have not been staged, you can use git checkout to restore the file to the last commit.
| fernandomoyano | |
1,915,778 | My latest project with MERN stack | Techniques used: ReactJs Mongodb Mongoose ExpressJs JWT Nodejs Mantine Core Mantine... | 0 | 2024-07-08T13:06:10 | https://dev.to/a7med-amnt/my-latest-project-with-mern-stack-12dl | webdev, beginners, mern, programming | **Techniques used**:
- ReactJs
- Mongodb
- Mongoose
- ExpressJs
- JWT
- Nodejs
- Mantine Core
- Mantine Form
**Features**:
- CRUD Operations for Projects & personal info
- Dark mode
- Translations
- Dashboard
_Project url_: [a7med-amnt](https://mys-9yfx.onrender.com/)
_Project code_: [github-a7med-amnt](https://github.com/a7med-amnt/ahmed-amnt)
| a7med-amnt |
1,915,781 | Let's Connect | Hi, I am CrownCode, a web developer. Let's connect and grow together; drop your number or GitHub handle,... | 0 | 2024-07-08T13:07:32 | https://dev.to/crown_code_43cc4b866d2688/let-connect-2gi4 | webdev, javascript, beginners, programming | Hi, I am CrownCode, a web developer. Let's connect and grow together; drop your number or GitHub handle, preferably number.... | crown_code_43cc4b866d2688 |
1,915,785 | In-Demand Data Analyst Skills To Get Easily Hired in 2024 | Introduction The demand for data analysts continues to soar as businesses across various... | 0 | 2024-07-08T13:19:53 | https://dev.to/sejal_4218d5cae5da24da188/in-demand-data-analyst-skills-to-get-easily-hired-in-2024-3709 | ## Introduction
The demand for data analysts continues to soar as businesses across various sectors recognize the value of data-driven decision-making. To stand out in the competitive job market of 2024, aspiring data analysts need to master a specific set of skills. This blog highlights the most sought-after skills that will make you a highly desirable candidate for data analyst roles.
## 1. Mastery of SQL
Structured Query Language (SQL) is essential for any data analyst. It is the universal language for interacting with databases, allowing analysts to efficiently update, organize, and retrieve data from relational databases. Mastery of SQL enables analysts to:
• **Query Databases**: Extract meaningful insights by writing efficient queries.
• **Modify Data Structures**: Adapt schemas to fit the needs of various data projects.
• **Ensure Data Integrity**: Maintain the accuracy and consistency of data.
## 2. Proficiency in Statistical Programming Languages
Languages like R and Python are invaluable for conducting advanced data analyses. They offer significant advantages over traditional spreadsheet software, enabling data analysts to:
• **Clean and Prepare Data**: Handle large datasets more efficiently.
• **Perform Complex Analyses**: Implement sophisticated statistical methods and machine learning algorithms.
• **Create Visualizations**: Generate informative and aesthetically pleasing data visualizations.
## 3. Understanding Machine Learning
Machine learning is a critical skill for modern data analysts. This branch of AI involves creating algorithms that can learn from data and improve over time. Key aspects include:
**• Algorithm Development**: Build models to identify patterns and make predictions.
**• Data Processing**: Prepare and preprocess data for machine learning applications.
**• Model Evaluation**: Assess the performance and accuracy of machine learning models.
## 4. Strong Foundations in Probability and Statistics
A solid understanding of probability and statistics is crucial for data analysts. These skills allow analysts to:
• **Interpret Data**: Draw meaningful conclusions from data.
• **Make Predictions**: Use statistical methods to forecast future trends.
• **Design Experiments**: Plan and analyze experiments to test hypotheses.
## 5. Effective Project Management
Project management skills are vital for data analysts to manage their workload efficiently. This involves:
• **Task Organization**: Break down projects into manageable tasks.
• **Time Management**: Prioritize tasks to meet deadlines.
• **Team Collaboration**: Work effectively with colleagues and stakeholders.
## 6. Competence in Data Management
Data management encompasses the processes of collecting, organizing, and storing data. Data analysts need to:
• **Ensure Data Security**: Protect data from unauthorized access and breaches.
• **Maintain Data Quality**: Ensure data is accurate and up-to-date.
• **Optimize Data Storage**: Use cost-effective methods to store large volumes of data.
## 7. Expertise in Statistical Visualization
Data visualization is a crucial skill for presenting data insights. Data analysts should be able to:
• **Create Clear Visuals**: Develop charts, graphs, and dashboards that effectively communicate data insights.
• **Tell a Story**: Use visualizations to convey a compelling narrative.
• **Drive Decision-Making**: Help stakeholders make informed decisions based on visual data representations.
## 8. Soft Skills and Critical Thinking
Beyond technical skills, data analysts need strong soft skills and critical thinking abilities. This includes:
• **Communication Skills**: Clearly explain findings to non-technical stakeholders.
• **Problem-Solving Skills**: Approach data challenges with innovative solutions.
• **Collaboration Skills**: Work effectively within a team.
## Conclusion
Mastering these in-demand skills will make you a highly attractive candidate in the data analytics job market of 2024. Stay ahead of the curve by continuously developing these skills and applying them to real-world problems.
For more detailed insights into the skills needed for data analysts and the latest trends in the field, read our comprehensive blog on the [Pangaea X](https://www.pangaeax.com/2023/09/29/in-demand-data-analyst-skills-to-get-easily-hired-in-2024/).
| sejal_4218d5cae5da24da188 | |
1,915,807 | How to Obtain a Canada Temporary Number for WhatsApp Verification | In today's digital age, having a virtual phone number can be incredibly beneficial for various... | 0 | 2024-07-08T13:20:53 | https://dev.to/legitsms/how-to-obtain-a-canada-temporary-number-for-whatsapp-verification-3n11 | web3, chatgpt, web, help | In today's digital age, having a virtual phone number can be incredibly beneficial for various reasons, including maintaining privacy and managing multiple accounts. One popular use case is verifying WhatsApp accounts. This guide will walk you through the step-by-step process of obtaining a Canada virtual number from Legitsms.com for WhatsApp verification. By following this guide, you can ensure a smooth and hassle-free verification process.
Why Use a Canada Temporary Number for WhatsApp Verification?
Using a [Canada temporary number](legitsms.com) for WhatsApp verification offers several advantages:
**- Privacy Protection:** Keeps your number private.
**- Convenience**: Manage multiple accounts without needing multiple physical SIM cards.
**- Accessibility:** Ideal for businesses or individuals who need a Canadian presence.
Getting Started with legitsms.com
Legitsms.com is a trusted platform that provides virtual phone numbers for SMS verification across various platforms, including WhatsApp, Discord, Telegram, Gmail, Tinder, Facebook, and more. Here's how you can use it to obtain a free Canada SMS number.
Step 1: Sign Up on legitsms.com
First, visit legitsms.com and create an account. The sign-up process is straightforward:
1. Click on the "Sign Up" button.
2. Fill in your details including your email address and create a password.
Step 2: Make a Deposit
Once your account is set up, you need to make a deposit. The minimum deposit amount is $5, which is quite affordable and ensures you have sufficient funds for multiple verifications.
1. Log in to your account.
2. Navigate to the "Add Fund" section.
3. Choose your preferred payment method and deposit at least $5 into your account. We accept Bitcoin, Litecoin, Monero, USDT, Bank Card, and other payment methods.
Step 3: Select WhatsApp Service
After you fund your account, navigate to the left corner of the site and select the WhatsApp service.
1. Go to the "Services" section.
2. Choose "WhatsApp" from the list of available services.
Step 4: Choose Canada as the Country
Next, you need to select Canada as the country for your virtual number.
1. In the services section, scroll down or use the search bar to choose a country.
2. Select "Canada" from the dropdown menu.
Step 5: Obtain the Virtual Number
Now, you can obtain your Canada temporary number.
1. The platform will display a generated Canada number on your dashboard.
2. Copy the number, which you'll use for WhatsApp verification.
Step 6: Use the Number for WhatsApp Verification
With your Canada virtual number ready, follow these steps to verify your WhatsApp account:
1. Open WhatsApp and begin the registration process.
2. Enter the Canada temporary number when prompted.
3. Wait for the activation code to be sent to the virtual number.
Step 7: Retrieve the Activation Code
Once the activation code is sent, it will appear on your legitsms.com dashboard.
1. Keep an eye on your dashboard for incoming SMS messages.
2. Copy the received activation code.
Step 8: Complete WhatsApp Verification
Enter the activation code you received on WhatsApp to complete the verification process.
1. Paste the code into the verification field on WhatsApp.
2. Your WhatsApp account is now verified using the Canada temporary number.
Advantages of Using legitsms.com for Canada sms verification
Legitsms.com offers several benefits for obtaining virtual numbers:
- Reliability: You are ONLY charged after a successful SMS receipt.
- Ease of Use: User-friendly interface and straightforward process.
- Cost-Effective: Affordable pricing for virtual numbers. Our Canada temporary numbers start from 0.60$
Conclusion
Using a Canada virtual number for WhatsApp verification is a practical solution for privacy and convenience. Legitsms.com simplifies this process, providing reliable and affordable virtual numbers. By following the steps outlined in this guide, you can easily obtain and use a Canada temporary number for your WhatsApp verification needs.
FAQs
1. Is obtaining a Canada virtual number from legitsms.com secure?
legitsms.com uses secure protocols to protect your information and ensure that obtaining a virtual number is safe.
2. Can I use the Canada temporary number for other platforms besides WhatsApp?
Yes, the virtual number from legitsms.com can be used for SMS verification on various platforms, not just WhatsApp.
3. How quickly can I receive the activation code after requesting it?
The activation code is usually received instantly but may take a few minutes depending on the platform's network traffic.
4. What happens if I do not receive the activation code?
If you did not receive the activation code, check that the number is correct and that there are no connectivity issues. You can blacklist the number and choose another one; you are only charged after a successful SMS delivery.
5. [Can I reuse the Canada temporary number for multiple verifications?](legitsms.com)
Typically, virtual numbers are intended for single-use verification. For multiple verifications, you may need to obtain new numbers. Check this guide on how to use Canada's free SMS number for other verification.
Following these steps and tips, you can efficiently manage your WhatsApp verification using a Canada virtual number, ensuring privacy and convenience. | legitsms |
1,915,844 | Let's Measure Performance with Playwright Feat: Chrome Trace | Introduction Hello there. I recently wrote an article on optimizing the performance of my... | 0 | 2024-07-08T13:22:58 | https://dev.to/moondaeseung/lets-measure-performance-with-playwright-feat-chrome-trace-3ino | # Introduction
Hello there. I recently wrote an article on optimizing the performance of my library, [Flitter](https://flitter.dev/). Every time I measured the library's performance, I had to manually access `chrome devtools` and press the record button. In this post, we'll look at how to record and track performance for sustainable optimization.
### Library Operation
Before explaining the measurement process, let's discuss how the library we're measuring operates.

Similar to how React divides its process into the “Render Phase” and “Commit Phase,” Flitter operates in two distinct actions: build (mount) and draw.
- **Build (Mount):**
Flitter starts by creating a virtual DOM corresponding to the SVG whenever a state change occurs. If it’s the initial build, instead of the rebuild method, a mount action is invoked.
- **Draw:**
Unlike React, the draw phase in Flitter includes calculating the layout. Once it determines where each SVG element should be positioned, it calls the paint method to reflect the changes in the actual DOM.

# **Let’s Measure with Playwright**
Playwright is a Node.js library designed for browser automation. Developed by Microsoft, it supports major browsers like Chrome (Chromium), Firefox, and Safari (Webkit). Though primarily aimed at automating web application testing, Playwright can also be used for web scraping and web UI automation tasks.
```tsx
test('Capture performance traces and save a JSON file when the diagram is rendered', async ({
page,
browser
}) => {
await browser.startTracing(page, {
path: `./performance-history/${formatDate(new Date())}.json`
});
await page.goto('http://localhost:4173/performance/diagram');
await page.evaluate(() => window.performance.mark('Perf:Started'));
await page.click('button');
await page.waitForSelector('svg');
await page.evaluate(() => window.performance.mark('Perf:Ended'));
await page.evaluate(() => window.performance.measure('overall', 'Perf:Started', 'Perf:Ended'));
await browser.stopTracing();
});
```
Using Playwright, we can easily extract a chrome trace report. You will see the performance reports accumulate at the specified path.

# **Analyzing the Trace Report**
A Chrome trace report displays all traceEvents that occurred during a specified interval. Importing this into Chrome DevTools allows us to see the execution times of functions. The Chrome trace viewer interprets trace events to show the start and end times of function call stacks. A traceEvent includes the following items:
```tsx
type TraceEvent = {
args: object; // An object containing additional information related to the event
cat: string; // A string representing the event category
name: string; // The name of the event
ph: string; // A string indicating the type of event (e.g., 'M' stands for a Meta event)
pid: number; // Process ID
tid: number; // Thread ID
ts: number; // Timestamp (in microseconds)
dur?: number; // Duration of the event (in microseconds, optional)
tdur?: number; // Duration of the event based on thread time (in microseconds, optional)
tts?: number; // Timestamp based on thread time (in microseconds, optional)
};
```
## **CpuProfile Event**
We need events where the category (cat) is **`disabled-by-default-v8.cpu_profiler`**. These events show how long functions occupy the CPU. Although we cannot determine the event's duration directly, combining the information from **samples** and **timeDeltas** allows us to deduce the execution time for each function.
```tsx
interface ProfileChunk {
args: {
data: {
cpuProfile: {
nodes: Node[]; // Array of node IDs for each sample
samples: number[]; // Array of node IDs for each sample
};
lines: number[]; // Array of source code line numbers for each sample
timeDeltas: number[]; // Array of time differences between each sample (in milliseconds)
};
};
cat: string; // Event category
id: string; // Unique identifier for the profile chunk
name: string; // Event name
ph: string; // Event type (P: ProfileChunk)
pid: number; // Process ID
tid: number; // Thread ID
ts: number; // Timestamp (in microseconds)
tts?: number; // Time since thread start (in microseconds, optional)
};
interface Node {
callFrame: {
codeType: string; // Code type (JS: JavaScript)
columnNumber: number; // Column number in the source code
functionName: string; // Function name
lineNumber: number; // Line number in the source code
scriptId: number; // Script ID
url: string; // Script URL
};
id: number; // Unique identifier of the node
parent?: number; // ID of the parent node (if not a root node)
};
```
**`disabled-by-default-v8.cpu_profiler`** events occur at regular intervals, storing the current call stack location (node id) in **samples** and the time spent in each node in **timeDeltas**. We can infer the call stack traces through the node **id** and **parent**.
To calculate a specific function's execution time:
1. Traverse samples and timeDeltas, calculating the total timeDeltas for each node id.
2. Connect the nodes' parent-child relationships, adding each child's duration to its parent's duration.
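The two-step aggregation above can be sketched on a tiny, made-up profile (the node ids, samples, and deltas here are illustrative, not from a real trace):

```javascript
// Minimal sketch: aggregate per-node CPU time from samples/timeDeltas,
// then roll each child's duration up into its parent.
const nodes = [
  { id: 1 },             // root (no parent)
  { id: 2, parent: 1 },  // e.g. runApp
  { id: 3, parent: 2 },  // e.g. draw
];
const samples = [2, 3, 3, 2];          // node id sampled at each tick
const timeDeltas = [100, 50, 50, 100]; // microseconds spent in each sample

// Step 1: sum timeDeltas per sampled node id (self time).
const selfTime = {};
samples.forEach((id, i) => {
  selfTime[id] = (selfTime[id] || 0) + timeDeltas[i];
});

// Step 2: process nodes in descending-id order so children are rolled
// up before their parents (assumes children have higher ids than parents,
// as the analyzer below does).
const total = {};
[...nodes]
  .sort((a, b) => b.id - a.id)
  .forEach((n) => {
    total[n.id] = (total[n.id] || 0) + (selfTime[n.id] || 0);
    if (n.parent != null) total[n.parent] = (total[n.parent] || 0) + total[n.id];
  });

console.log(total); // { '1': 300, '2': 300, '3': 100 } — the parent's time includes its children's
```

This mirrors the logic of the analyzer below: a node's reported duration is its own sampled time plus everything spent in its subtree.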
### Here is the implementation
```jsx
class ChromeTraceAnalyzer {
nodes;
constructor(trace) {
this.setConfig(trace);
}
// Looks up a function's execution time by name. The unit is milliseconds (ms).
getDurationMs(name) {
if (this.nodes == null) throw new Error('nodes is not initialized');
const result = this.nodes.find((node) => node.callFrame.functionName === name);
return result.duration / 1000;
}
// Builds the call stack tree (nodes) from the chrome trace report.
setConfig(trace) {
const { traceEvents } = trace;
const profileChunks = traceEvents
.filter((entry) => entry.name === 'ProfileChunk')
const nodes = profileChunks
.map((entry) => entry.args.data.cpuProfile.nodes)
.flat()
const sampleTimes = {};
profileChunks.forEach((chunk) => {
const {
cpuProfile: { samples },
timeDeltas
} = chunk.args.data;
samples.forEach((id, index) => {
const delta = timeDeltas[index];
const time = sampleTimes[id] || 0;
sampleTimes[id] = time + delta;
});
});
this.nodes = nodes.map((node) => ({
id: node.id,
parent: node.parent,
callFrame: node.callFrame,
children: [],
duration: sampleTimes[node.id] || 0
}));
const nodesMap = new Map();
this.nodes.forEach((node) => {
nodesMap.set(node.id, node);
});
this.nodes
.sort((a, b) => b.id - a.id)
.forEach((node) => {
if (node.parent == null) return;
const parentNode = nodesMap.get(node.parent);
if (parentNode) {
parentNode.children.push(node);
parentNode.duration += node.duration;
}
});
}
}
```
# **Let’s Analyze and Record with Playwright**
Now, let’s analyze the trace with Playwright and record the results. The main functions are **`runApp`**, **`mount`**, **`draw`**, **`layout`**, and **`paint`**.
- **RunApp:** The parent function of **`mount`** and **`draw`**. It encompasses all activities involved in drawing the diagram.
- **Draw:** The parent function of **`layout`** and **`paint`**. It includes tasks that modify the actual DOM.
```jsx
test('Capture analyzed trace when diagram is rendered', async () => {
const COUNT = 10;
const duration = {
timestamp: Date.now(),
runApp: 0,
mount: 0,
draw: 0,
layout: 0,
paint: 0,
note: ''
};
for (let i = 0; i < COUNT; i++) {
const browser = await chromium.launch({ headless: true });
const context = await browser.newContext();
const page = await context.newPage();
await page.goto('http://localhost:4173/performance/diagram');
await browser.startTracing(page, {});
await page.evaluate(() => window.performance.mark('Perf:Started'));
await page.click('button');
await page.waitForSelector('svg');
await page.evaluate(() => window.performance.mark('Perf:Ended'));
await page.evaluate(() =>
window.performance.measure('overall', 'Perf:Started', 'Perf:Ended')
);
const buffer = await browser.stopTracing();
const jsonString = buffer.toString('utf8'); // convert the buffer to a UTF-8 string
const trace = JSON.parse(jsonString); // parse the string into a JSON object
const analyzer = new ChromeTraceAnalyzer(trace);
duration.runApp += analyzer.getDurationMs('runApp') / COUNT;
duration.mount += analyzer.getDurationMs('mount') / COUNT;
duration.draw += analyzer.getDurationMs('draw') / COUNT;
duration.layout += analyzer.getDurationMs('layout') / COUNT;
duration.paint += analyzer.getDurationMs('paint') / COUNT;
browser.close();
}
console.log('****Execution Time****');
console.log(`runApp: ${duration.runApp}ms`);
console.log(`mount: ${duration.mount}ms`);
console.log(`draw: ${duration.draw}ms`);
console.log(`layout: ${duration.layout}ms`);
console.log(`paint: ${duration.paint}ms`);
console.log('********************');
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const filePath = path.join(__dirname, '../performance-history/duration.ts');
let fileContent = fs.readFileSync(filePath, { encoding: 'utf8' });
fileContent += `histories.push(${JSON.stringify(duration)});\n`;
fs.writeFileSync(filePath, fileContent);
});
```
We will not directly save the trace report but rather the analyzed results. To ensure a uniform testing environment, we create a new browser instance for each performance report. If you repeatedly delete and redraw diagrams without resetting the browser, you can observe runApp's execution time decreasing due to browser optimization behaviors.
### Here are the results

# **Conclusion**

trace history chart
For sustainable optimization efforts, it's crucial to record performance like this. By setting up CI/CD to fail tests if function execution times exceed certain thresholds, we can track unexpected performance degradations during code modifications. Although currently based on my local machine, plans are in place to standardize the machine environment using Docker.
The scarcity of libraries for analyzing Chrome traces meant I had to develop one myself. While working on the Flitter library and analyzing other metrics like memory allocation, I gained enough knowledge to consider creating a chrome-trace-viewer library. (The existing libraries on npm either do not work well or were released 5–6 years ago.)
You can check the actual code here:
**GitHub**: [https://github.com/meursyphus/flitter/blob/dev/packages/test/tests/tracking-performance.test.ts](https://github.com/meursyphus/flitter/blob/dev/packages/test/tests/tracking-performance.test.ts)
That's all for now.
# **Reference**
- [Performance Testing in Playwright](https://medium.com/@anandhik/performance-testing-in-playwright-64cdef431e2e)
- [Chrome Trace Event Format Docs](https://docs.google.com/document/d/1CvAClvFfyA5R-PhYUmn5OOQtYMH4h6I0nSsKchNAySU/preview#heading=h.yr4qxyxotyw)
| moondaeseung | |
1,915,846 | Learn Python - Day 1 | Chapter 1 What python can do? Python is versatile programming language and... | 0 | 2024-07-09T12:34:41 | https://dev.to/dinesh_chinnathurai_136b1/python-learning-59e7 | introduction | ## Chapter 1
## What can Python do?
- Python is a versatile programming language known for its simplicity and readability
- Python can do many things, like web development, data analysis and visualization, AI and ML, scripting for automation, desktop GUI applications, database access, and so on.
## Why Python?
Python's simplicity, versatility, community support, and industry adoption make it a preferred choice for a wide range of applications, and it is now used effectively in fields such as finance, data visualization, ML, and AI.
**Key points**
- Python's syntax is easy to understand, and it resembles human language.
- It supports multiple programming paradigms like procedural, object-oriented, and functional programming.
- It has a large, active community of developers for support, knowledge sharing, contributing to Python open-source projects, and so on.
- Python can run on various platforms like Windows, Linux, and macOS.
- Python is open-source software (free to use)
## Python syntax compared to other programming languages
- Python's syntax is concise and emphasizes readability with minimal syntactic overhead; indentation is used to define the structure of the code, making it clear and organized.
- Other programming languages use explicit syntax with curly braces to define blocks of code and require more syntactic boilerplate for method definitions and class structures.
## Python Installation
**Windows:**
- Download the latest package from the link below, or browse for the package that matches your Windows OS.
- https://www.python.org/ftp/python/3.12.4/python-3.12.4-amd64.exe
- Double-click the .exe file and follow the instructions in the installation window until the installation completes successfully.
**To verify the installation**
Open a command prompt and type python; if it installed properly, you will see the information below with version details.
**C:\Users\Win11>python**
Python 3.12.4 (tags/v3.12.4:8e8a4ba, Jun 6 2024, 19:30:16) [MSC v.1940 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
**Ubuntu:**
- Open a terminal window
- Run the command "sudo apt-get install python3"
- Follow the prompts to complete the installation
- Alternatively, download the Python package for Ubuntu from the official Python website and install it manually.
**To verify the installation**
- Open a terminal
- Type "python3 --version"
- It shows the Python version details
**First Python program with the print() statement**
- Open a command prompt in Windows and type python
- Then type the Python command/code - refer to the image below

**Note:**
- We can use an editor to write code, like VS Code, Sublime Text, Notepad++, etc.
- We can use the Google Colab editor to practice Python programs:
https://colab.research.google.com/
| dinesh_chinnathurai_136b1 |
1,915,847 | Soccer Uniform | Soccer uniforms, also known as football kits or jerseys, typically consist of several components intended... | 0 | 2024-07-08T13:29:08 | https://dev.to/alpaca_uniform_ae3e003f4b/soccer-uniform-3beh | Soccer uniforms, also known as football kits or jerseys, typically consist of several components designed to provide comfort, performance, and team identity. Here are the key elements of a soccer uniform:
Jersey/Kit:
Design: The jersey design often incorporates the team's colours, logo, and sponsor logos.
Material: Jerseys are usually made from lightweight, breathable fabrics such as polyester to wick away sweat and keep players cool.
Fit: The fit is generally snug to the body, allowing freedom of movement.
Shorts:
Design: Shorts also follow the team's design ([for more detail](https://alpacaintl.com/soccer-uniforms/)).
 | alpaca_uniform_ae3e003f4b |
1,915,848 | Why Magic is Superior to Supabase | Supabase is a toy No-Code and Low-Code framework, allowing you to wrap your database in CRUD API... | 0 | 2024-07-08T13:29:39 | https://ainiro.io/blog/why-magic-is-superior-to-supabase | lowcode | [Supabase](https://supabase.com) is a toy No-Code and Low-Code framework, allowing you to wrap your database in CRUD API endpoints in some few seconds.
The problem is that once you need to go beyond CRUD, there are no real alternatives to coding. I've written extensively about the problems originating from this in a [previous article](https://ainiro.io/blog/supabase-versus-magic-you-win), but basically it's a rubbish idea, that looks great on stage when demonstrating stuff - But never actually works in real world solutions due to complex use cases and rich requirements.
In the following video I demonstrate three things you can do with Magic that you cannot do with Supabase.
1. Joins on tables
2. Declaratively adding business logic without coding
3. Manually coding once you need to without having to create _"edge functions"_
{% embed https://www.youtube.com/watch?v=tRo5_V2Hfqk %}
Notice, there's a million things you can do with Magic that you cannot do with Supabase. The above is just a tiny little teaser to give you an idea.
## PostgREST was a bad idea
Supabase is entirely based upon PostgREST, and PostgREST was a dumb idea. Sure, it solves CRUD for you, but regardless of how many CRUD endpoints you've got, you will never be able to deliver working software with CRUD alone. The reason is that every app requires custom business logic that goes beyond CRUD. Examples include:
* Write to the log when some item is created
* Send an email when some item is deleted
* Invoke a 3rd party service when some item is updated
* Update multiple records at the same time
* Etc, etc, etc
With PostgREST none of the above is even possible in theory. This culminated in Supabase having a sobering realisation a year ago that they needed to create _"edge functions"_. For the record, adding edge functions to Supabase is kind of like fixing your car with duct tape and chewing gum.
And once you're in _"edge function land"_ there's no no-code or low-code helping you out. You're basically just exchanging your IDE for a worse implementation, in exchange for a CRUD API you'll never be able to actually use for anything intelligent. About the only thing in Supabase that provides any real value whatsoever is that it automatically takes care of authentication and authorisation - But so does every other No-Code and Low-Code framework on the planet - Including Magic!
## What's an Edge Function
Edge functions basically work as an interceptor layer between your PostgREST API and the client, adding one additional network hop, which reduces scalability and makes your app slower. In addition, it increases your app's attack surface, making it more likely to be hacked.
And if you want to access your database directly from the edge function, why use Supabase in the first place anyway? After all, it's just a worse implementation of Azure functions and AWS Lambda functions, that you can buy on a per invocation price, resulting in paying 5 cents per month, for something Supabase would charge you $25 for per month.
Fixing these problems, basically implies implementing everything that traditional cloud platform providers have, such as AWS and Azure. For Supabase to deliver a fix for the above problems, implies there's nothing separating them from Azure and AWS anymore, besides possibly price. If price is an issue though, you can buy a droplet at DigitalOcean for $6 per month, allowing you to host hundreds of databases and thousands of web APIs.
And while we're at it, I bet you can find Docker images you can deploy to your droplet wrapping [OData](https://github.com/odata), allowing you to create your own Supabase in your own droplet, accessing MS SQL, MySQL, Oracle, and literally every single RDBMS that ever existed - And do so in some 5 minutes, equally fast as you can register at Supabase's website. However, ask yourself why you'd want to do that. OData might be less crappy than PostgREST, but it's still crap ...
The fact is, PostgREST was a bad idea, and Supabase was _founded_ on PostgREST. This is because they had no idea about real world requirements, due to a lack of experience with software development. Basically, they were a bunch of junior devs thinking they had found something cool - which is an oxymoron by itself once you realise that solutions such as PostgREST have been around since the late 1980s.
> It was a bad idea in 1980, and it's still a bad idea in 2024 - But it looks great on stage, because you can rapidly create a ReactJS client, wrapping your CRUD API, providing high _"bling factor"_
Psst, GraphQL suffers from _the exact same problem_ - As in, no business logic, no working app!
## Magic, the fix
If you want a _real_ low-code and no-code solution for your frontend, there's always [Magic](https://docs.ainiro.io/). It doesn't look as cool on stage, and probably not in videos either - But at least _it works_! And yes, we're charging 11 times as much as Supabase, but considering it's 1,000 times better, it's still a bargain.
And once you realise that we're actually a _real_ company, already profitable, not having to worry about your PaaS vendor running out of money, resulting in your app getting rug pulled - I'd say it's easily worth the $298 per month we're charging for a cloudlet.
Don't fall for hype please. Supabase is _based_ upon hype. But once you take away the hype, there's literally nothing left there. Supabase is not low-code or no-code, it's a 10 orders of magnitude worse IDE than what you're currently using. It's also 10x more expensive than Azure functions and AWS Lambda Functions, and provides nothing more than either of the two previously mentioned constructs. In addition, they're burning millions of dollars per month, possibly making them go belly up in a year due to running out of funding. And if you really want the PostgREST crap, OData is probably 1,000 times better.
> But they do have a nice website, I'll give them that 😊
If you're interested in discussing real working low-code and no-code solutions with us, feel free to contact us below.
* [Contact us](/contact-us)
| polterguy |
1,915,849 | What Is Data Storytelling? How Data Tells a Story | Terus Digital Marketing, part of Terus, is a provider of comprehensive digital solutions. It serves... | 0 | 2024-07-08T13:30:56 | https://dev.to/terus_digitalmarketing/data-storytelling-la-gi-cach-du-lieu-noi-len-cau-chuyen-2if3 | webdev, terus, teruswebsite, website | Terus Digital Marketing, part of Terus, is a provider of comprehensive digital solutions, serving businesses of all kinds in Ho Chi Minh City and nationwide. With experience in [comprehensive website SEO services that improve rankings and optimise costs](https://terusvn.com/seo/dich-vu-seo-tong-the-uy-tin-hieu-qua-tai-terus/) across many successful projects large and small, we always aim for sustainable growth and long-term partnerships with our clients. Below, Terus Digital Marketing introduces data storytelling.
Data storytelling can be seen as the art of storytelling in marketing, aimed at target customers. To write a successful brand story, the writer must deeply understand the company's product or service and be able to write in a vivid, memorable style.
Data storytelling lets you convey your story's message, influence, and inform specific audiences. It plays an important role in communicating information and creating impact for a business, specifically:
1. It makes your data engaging and memorable for your audience: instead of presenting only numbers and charts, data storytelling turns data into vivid stories that are easier to understand and remember.
2. It captures attention: in an age of information overload, grabbing customers' attention is crucial. Data storytelling helps businesses attract customers through compelling, persuasive stories.
3. It uses all the data you have: rather than using only a fraction of the data, data storytelling helps businesses leverage all of it to create interesting, meaningful stories.
4. It conveys the message quickly: instead of lengthy explanations, data storytelling helps businesses communicate the core message faster and more effectively.
5. It helps customers choose more easily: through data storytelling, businesses can help customers better understand their products and services, making it easier to choose what fits their needs.
Data storytelling techniques:
1. Start with a story: rather than presenting figures and data directly, begin with a relevant story. The story draws listeners in and creates an emotional connection.
2. Make the data more interesting: use visualisation tools such as charts, infographics, and images to present data vividly and accessibly. These elements make the story more engaging and memorable.
3. Place stories and data in concrete contexts: instead of presenting dry numbers, connect them to specific situations that customers can easily relate to and feel.
4. Foster connection: by telling data-driven stories, a business builds a deeper relationship with its customers, helping them feel cared for and better understood.
5. Understand the target audience: for data storytelling to be truly effective, a business must understand its audience, from their needs and interests to how they consume information. This helps the business design suitable data stories and connect better.
6. Differentiation matters greatly in data storytelling: weaving your brand's uniqueness into your story will leave a stronger impression on your target audience than today's generic stories.
To apply data storytelling effectively, a business should follow steps such as building the story, transforming the data, tying it to concrete contexts, fostering connection, understanding the target audience, and creating differentiation. Channels such as internal communications, dashboards, infographics, reports, and newsletters are also ways for a business to deploy data storytelling.
Learn more about [What Is Data Storytelling? How Data Tells a Story](https://terusvn.com/digital-marketing/data)
Other services at Terus:
Digital Marketing:
* [Facebook Ads services that reach potential customers](https://terusvn.com/digital-marketing/dich-vu-facebook-ads-tai-terus/)
* [Google Ads services that optimise costs and improve performance](https://terusvn.com/digital-marketing/dich-vu-quang-cao-google-tai-terus/)
Website design:
* [Website design services with unique interfaces for every industry](https://terusvn.com/thiet-ke-website-tai-hcm/)
| terus_digitalmarketing |
1,915,850 | 2024's Must-Play Free Arcade Games for Every Gamer | Freearcadegames.net has a wide array of free online arcade games that are appropriate for any age.... | 0 | 2024-07-08T13:32:18 | https://dev.to/corazon_5e82e1043ad8b3a14/2024s-must-play-free-arcade-games-for-every-gamer-igd | Freearcadegames.net has a wide array of free online arcade games suitable for all ages. It presents all sorts of HTML5 games, from classic arcade hits to thrilling new releases. You will never get bored at Freearcadegames.net: there is plenty to choose from, whether action-packed shooters, puzzle games, or nostalgic retro titles, all playable without downloading anything. Join our gaming community and start playing your favourite arcade game instantly!
Hashtags: FreeArcadeGames, OnlineGaming, ArcadeGames, FreeGames, BrowserGames, PlayGamesOnline, HTML5Games, FunGames, ArcadeGameCollection, FreeOnlineGames, AddictiveGames, GamingCommunity, PlayFreeGames, GamePortal, FamilyFriendlyGames, NewGames, GameWebsite, GamingHub, ArcadeFun, InstantGames, PlayNow, FreeOnlineArcade | corazon_5e82e1043ad8b3a14 | |
1,915,851 | 15 GIT Questions Will Make You Better Developer | Hi coders, welcome to our new article. Today, we are going to talk about most important and popular... | 0 | 2024-07-08T13:32:52 | https://dev.to/mammadyahyayev/15-git-questions-on-stackoverflow-will-make-you-better-developer-4bo7 | git, softwareengineering, github | Hi coders, welcome to our new article. Today, we are going to talk about most important and popular Git questions asked on Stackoverflow. You need to be great at Git, because it is highly used among entire IT industry.
## Introduction
Whether you are a backend or frontend developer, and whether you are writing code or teaching, learning Git will increase your productivity. So give a fair amount of time to learning Git.
Most developers, especially newbie programmers, struggle with Git in some scenarios. When you get stuck on something, you always search for the solution on Stackoverflow.
Therefore I decided to collect the most-asked questions from Stackoverflow in one article. Even if you haven't hit these Git problems yet, trust me, one day they will come for you, so be ready to fight.
Learn these concepts to avoid time-consuming operations.
## What is the Difference Between git pull and git fetch?
The pull command is probably one of the commands you will use most in your career. To use it properly, you need to understand what git pull does and how it differs from git fetch. There are really great visual explanations in this question that help you understand the concept better.
[Read more](https://stackoverflow.com/questions/292357/what-is-the-difference-between-git-pull-and-git-fetch)
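To make the difference concrete, here is a small sketch you can run in a throwaway directory. All names (`remote.git`, `work1`, `file.txt`) are illustrative; a local bare repository stands in for the remote, and the last three commands show that fetch followed by merge is essentially what pull does:

```shell
# Simulate a "remote" with a local bare repository; all names are illustrative.
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare remote.git
git -C remote.git symbolic-ref HEAD refs/heads/main   # make main the default branch
git clone -q remote.git work1 && cd work1
git config user.email a@example.com && git config user.name A
git symbolic-ref HEAD refs/heads/main
echo one > file.txt && git add file.txt && git commit -q -m "first"
git push -q origin main
cd .. && git clone -q remote.git work2   # a teammate's clone
cd work1 && echo two >> file.txt && git commit -q -a -m "second" && git push -q origin main
cd ../work2                           # this clone is now one commit behind
git fetch -q origin                   # download new commits; working tree untouched
git log --oneline HEAD..origin/main   # inspect what arrived before integrating
git merge -q origin/main              # integrate it; fetch + merge is what pull does
```

The practical takeaway: fetch is always safe to run, because it only updates your remote-tracking branches; pull additionally merges, which can change your working tree.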
## How to Undo Recent Local Commits?
Sometimes you make commits, then realize you need to go back and try something different. Undoing actions is always dangerous when you are working with a big team.
In this question, people have shown different options for undoing recent commits. Read about them, try them on your own, and choose the one that fits your situation.
[Read more](https://stackoverflow.com/questions/927358/how-do-i-undo-the-most-recent-local-commits-in-git)
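As a quick illustration (the file and commit messages are made up), `git reset --soft` removes the last commit while keeping its changes staged:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email demo@example.com && git config user.name Demo
echo v1 > app.txt && git add app.txt && git commit -q -m "good commit"
echo oops >> app.txt && git commit -q -a -m "commit to undo"
git reset --soft HEAD~1   # drop the commit, keep its changes staged
git log --oneline         # only "good commit" remains
# git reset --hard HEAD~1 would instead discard the changes entirely
```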
## How to Delete a Git Branch Locally and Remotely?
Branches are a really great way to work individually within a big team. Even if you are working by yourself, I recommend working with branches, because it makes everything easy to track separately.
After you complete your task, you merge the changes into the main branch and no longer need those branches. To delete them completely, read the different answers from numerous people.
[Read more](https://stackoverflow.com/questions/2003505/how-do-i-delete-a-git-branch-locally-and-remotely)
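A minimal sketch of the local half (the branch name `feature/login` is invented; the remote deletion is shown only as a comment, since this throwaway repo has no remote):

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email demo@example.com && git config user.name Demo
echo x > f.txt && git add f.txt && git commit -q -m "init"
git branch feature/login        # a finished feature branch
git branch -d feature/login     # delete it locally (-D forces if unmerged)
# Deleting the remote copy would be:
#   git push origin --delete feature/login
```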
## How to Remove Untracked Files from Git?
If you really like working in the terminal while manipulating your files, untracked files will annoy you.
Let's assume you want to stage your changes. If you type git add ., the command will stage all the files, including untracked ones, while you may only want to deal with the changed files. In that case you need to get rid of the untracked files first. To do it properly, read the answers.
[Read more](https://stackoverflow.com/questions/61212/how-do-i-remove-local-untracked-files-from-the-current-git-working-tree)
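A hedged sketch with made-up file names; the dry run (`-n`) is the important habit, because `git clean` is destructive:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email demo@example.com && git config user.name Demo
echo keep > tracked.txt && git add tracked.txt && git commit -q -m "init"
echo junk > scratch.txt && mkdir build && echo obj > build/out.o
git clean -nd   # dry run first: lists what WOULD be deleted
git clean -fd   # -f actually deletes untracked files, -d includes directories
```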
## How to Modify Last Commit Message?
Writing a good commit message always takes some time: you have to think of a message that describes what you did in the commit. If you made a lot of changes, writing a good message becomes an arduous task, so stick to this strategy: commit small, meaningful changes. I remember changing a commit message 5 times in a row before I found the most meaningful one.
If you are not allowed to force-push to the master branch, create a separate branch to work on individually; if you make a mistake, don't worry, you can change it because the branch belongs to you.
This depends on company preferences, though. If you don't know how to change a commit message, refer to this link.
[Read more](https://stackoverflow.com/questions/179123/how-to-modify-existing-unpushed-commit-messages)
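The common case is rewording the most recent commit with `--amend` (the messages below are invented):

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email demo@example.com && git config user.name Demo
echo x > f.txt && git add f.txt && git commit -q -m "fix bug"   # too vague
git commit --amend -q -m "fix crash when the user list is empty"
# Only amend commits you have not pushed or shared yet
```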
## How to Revert to a Specific Commit?
Walking along the commit history is necessary, especially when you are working with a team. Keeping the commit history as a straight line is as important as writing code. If the history contains multiple tangled lines, development becomes difficult.
Spend a good amount of time building a clean commit history, and read every answer in the question to get good at navigating it.
[Read more](https://stackoverflow.com/questions/4114095/how-do-i-revert-a-git-repository-to-a-previous-commit)
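One safe option, sketched with invented commits, is `git revert`, which adds a new commit that undoes a bad one rather than rewriting shared history:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email demo@example.com && git config user.name Demo
echo v1 > f.txt && git add f.txt && git commit -q -m "v1"
echo v2 > f.txt && git commit -q -a -m "v2 (bad change)"
git revert --no-edit HEAD   # safe on shared branches: adds a new undoing commit
# Destructive alternative for purely local history (not run here):
#   git reset --hard <commit-sha>
```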
## How to Move Commits into Another Branch?
Sometimes I realize I have created several commits that should belong to another branch. In that case, moving the commits to another branch is the sensible operation, but be careful: there is a possibility of breaking something.
You might ask why I keep saying you need to be very attentive. Perhaps you haven't faced these problems on your own yet, but in big teams the likelihood of them occurring is very high, and if you break something, it may affect other people's work as well.
Therefore, always be attentive while working with Git, and create a separate branch to work on individually in a big team.
[Read more](https://stackoverflow.com/questions/1628563/move-the-most-recent-commits-to-a-new-branch-with-git)
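One common recipe (branch and file names are illustrative): create the new branch where you are, then rewind the old one. This sketch moves the last two commits off `main`:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git symbolic-ref HEAD refs/heads/main   # pin the branch name for the demo
git config user.email demo@example.com && git config user.name Demo
echo base > f.txt && git add f.txt && git commit -q -m "base"
echo a >> f.txt && git commit -q -a -m "feature work 1"  # landed on main by mistake
echo b >> f.txt && git commit -q -a -m "feature work 2"
git branch feature        # the new branch keeps the two commits
git reset --hard HEAD~2   # rewind main, removing them from here
git checkout -q feature   # carry on where the work belongs
```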
## What is the Difference Between git merge and git rebase?
In my view, merge and rebase are among the most important concepts in Git. They can affect a lot of things, so pay even more attention when learning them. If you want to know when to use which one, read every answer in the question.
[Read more](https://stackoverflow.com/questions/16666089/whats-the-difference-between-git-merge-and-git-rebase)
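A tiny sketch of the merge side (all names invented); the rebase alternative, which would rewrite the topic commits on top of main instead of adding a merge commit, is shown only as a comment:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git symbolic-ref HEAD refs/heads/main
git config user.email demo@example.com && git config user.name Demo
echo base > base.txt && git add base.txt && git commit -q -m "base"
git checkout -q -b topic
echo t > topic.txt && git add topic.txt && git commit -q -m "topic work"
git checkout -q main
echo m > main.txt && git add main.txt && git commit -q -m "main moved on"
git merge -q --no-edit topic   # merge: ties the histories with a merge commit
# Rebase alternative (replays topic's commits on top of main, linear history):
#   git checkout topic && git rebase main
```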
## How to Resolve Merge Conflicts?
Merge conflicts are another thing whose sensitivity you need to be aware of. If you are working in a team, always contact your teammates to solve this problem. You have to make sure which changes you accept: local or incoming.
You can resolve conflicts from the terminal, but in this situation a text editor or an IDE is usually more appropriate. In this question, people list the tools you can use to resolve conflicts.
[Read more](https://stackoverflow.com/questions/161813/how-do-i-resolve-merge-conflicts-in-a-git-repository)
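For the simple case where one whole side should win, a sketch (file and branch names invented); normally you would instead open the file and edit the conflict markers by hand:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git symbolic-ref HEAD refs/heads/main
git config user.email demo@example.com && git config user.name Demo
echo "colour: red" > conf.txt && git add conf.txt && git commit -q -m "base"
git checkout -q -b incoming
echo "colour: blue" > conf.txt && git commit -q -a -m "prefer blue"
git checkout -q main
echo "colour: green" > conf.txt && git commit -q -a -m "prefer green"
git merge incoming || true          # conflict: the merge stops and waits for us
git checkout --theirs conf.txt      # take the incoming side wholesale
git add conf.txt && git commit -q -m "merge, keeping blue"
```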
## How to Remove a File from Git Repository Without Deleting it from Local File System?
While coding, I like to create a txt file and add some to-do lists or put ideas about the project in that file. Sometimes I forget to add it to the .gitignore file, which causes the file to be added to the remote Git repository, which is not what I want.
If you want to keep that file in your local system, but remove it from remote repository, use the following command.
```shell
git rm --cached file.txt
```
It’s very simple, but read the answers and try to implement it on your own.
[Read more](https://stackoverflow.com/questions/1143796/remove-a-file-from-a-git-repository-without-deleting-it-from-the-local-filesyste)
## How to Commit Only Particular Part of the File?
While writing code in a single file, sometimes each change belongs to a different feature, in which case it is more convenient to commit them separately. This is called [interactive staging](https://git-scm.com/book/en/v2/Git-Tools-Interactive-Staging) in Git.
You can do this from the terminal, but I prefer to use [Sublime Merge](https://www.sublimemerge.com/), because the terminal can cause some trouble in this case.
Nowadays, I use interactive staging a lot, both at work and in my personal projects. I believe you will find it very useful. Read the answers one by one and practice a lot, because it is a little bit difficult to do.
[Read more](https://stackoverflow.com/questions/1085162/commit-only-part-of-a-files-changes-in-git)
## How to List All Files in a Commit?
This is useful when a commit contains multiple files; however, I recommend not committing lots of files at once. It may not be a very common need, but it is great to read about.
There are several answers that explain everything in detail. Visit the link and read them.
[Read more](https://stackoverflow.com/questions/424071/how-do-i-list-all-the-files-in-a-commit)
## What Does cherry-picking a Commit with Git Mean?
Do you know what cherry-pick is? Put simply, it allows you to select specific commits from one branch and apply them to another. To be honest, I have never used it before. You can read about it in the official documentation as well as on Stackoverflow.
[Read more](https://git-scm.com/docs/git-cherry-pick)
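A small sketch (branch and file names invented) of copying just one commit, such as an urgent fix, from an experimental branch onto main, leaving the unfinished work behind:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git symbolic-ref HEAD refs/heads/main
git config user.email demo@example.com && git config user.name Demo
echo base > f.txt && git add f.txt && git commit -q -m "base"
git checkout -q -b experiment
echo fix > hotfix.txt && git add hotfix.txt && git commit -q -m "urgent fix"
echo wip > wip.txt && git add wip.txt && git commit -q -m "unfinished work"
fix_sha=$(git rev-parse HEAD~1)   # just the commit we want
git checkout -q main
git cherry-pick "$fix_sha"        # copy only that commit onto main
```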
## How to see the differences between two branches?
To see the difference between two branches or two commits, an IDE or text editor is a good choice. But sometimes, when there are fewer changes between the branches or commits, I prefer the terminal.
[Read more](https://stackoverflow.com/questions/9834689/how-can-i-see-the-differences-between-two-branches)
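The terminal version is a one-liner; a sketch with invented branch names:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git symbolic-ref HEAD refs/heads/main
git config user.email demo@example.com && git config user.name Demo
echo v1 > f.txt && git add f.txt && git commit -q -m "base"
git checkout -q -b feature
echo v2 > f.txt && git commit -q -a -m "change on feature"
git checkout -q main
git diff main..feature        # the full patch between the two branch tips
git diff --stat main feature  # or just a per-file summary
```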
## How to Make Pretty Git Graphs?
This is the command I use most, maybe 40 or 50 times a day. You can print the commit history in a beautiful format to see it clearly. I use the following command for this:
```shell
git log --oneline --graph
```

But in development, typing this command takes some time, therefore I created an alias for this and called it flog. So I can do the same thing by using following command.
```shell
git flog
```
**flog** means _formatted log_. This is the name I gave it; of course you can use any name you want.
If you want to create your own alias, take a look at the following command.
```shell
git config --global alias.flog "log --oneline --graph"
```
This command is enough for my use cases. But in the question, there are lots of other ways to customize log history.
[Read more](https://stackoverflow.com/questions/1057564/pretty-git-branch-graphs)
## Conclusion
That’s it for this article. I hope you enjoyed and learned a lot of things.
If you have any questions or ideas drop a comment or feel free to reach me via [LinkedIn](https://www.linkedin.com/in/mammadyahya/).
See you soon in upcoming articles.
| mammadyahyayev |
1,915,854 | AI and Data Privacy: Balancing Innovation and Security in the Digital Age | Artificial intelligence is ubiquitous nowadays, from those eerily accurate music recommendations to... | 0 | 2024-07-08T13:39:35 | https://dev.to/digitalsamba/ai-and-data-privacy-balancing-innovation-and-security-in-the-digital-age-hoh | ai, privacy, webdev, security | Artificial intelligence is ubiquitous nowadays, from those eerily accurate music recommendations to robots operating colossal factories. However, all this remarkable AI technology relies on extensive datasets to make decisions, and this data includes our personal information!
The pressing question is, how can we harness the power of AI while safeguarding our data? Worry not, as this article will shed light on the potential threats AI poses to data privacy. We will delve into how AI's insatiable appetite for information can expose your details, raising concerns about who truly controls your data and how it may be utilised. Furthermore, we will examine the potential risks to your privacy in this swiftly evolving AI landscape and share strategies to protect yourself.
## Understanding [AI and data privacy](https://www.digitalsamba.com/blog/data-privacy-and-ai)
You've probably encountered AI at work without even realising it. Think of those smart virtual assistants that can understand and respond to your voice commands, or the customer service chatbots that handle your queries in real-time. Even automated content writing tools fall under the AI umbrella!
But have you ever pondered what powers these nifty AI capabilities? The answer is data—vast, immense amounts of data that AI systems analyse to detect patterns and "learn".
We're not just referring to generic data here. Much of the information feeding AI comes directly from our digital footprints—the websites we visit, the items we purchase online, our geographic locations, and much more. In essence, it's personal data about you and me.
Can you see the potential issue? AI relies on this intimate user data to provide its intelligent functionality, which brings us to the realm of data privacy—our ability to control how our personal details are collected, shared, and used by companies.
Should we halt using AI or refrain from contributing to its development because it requires substantial data for training? Certainly not! AI offers us significant convenience, so we need to find a way to balance AI and data privacy. There are solutions like data anonymisation, which effectively removes any personal details from the information AI uses. Additionally, maintaining robust security measures to safeguard our data helps prevent information breaches. We will explore these in more detail in the following sections.
As AI continues to evolve, so will the regulations around data privacy. It's crucial to understand this connection so we can create a future where everyone enjoys the benefits of AI while retaining control over their personal information.
## AI data collection methods
AI programmes require an incredible amount of information to train on. But how exactly do they gather all this data? Let's explore some of the most common methods used to feed an AI's knowledge base:
### Web scraping
The internet is a vast treasure trove of information, and websites and social media are brimming with valuable nuggets! This is where a technique called web scraping comes in. It's like having super-powered assistants for AI systems. Web scraping uses special programmes, akin to super-fast readers, that can automatically scan websites and social media platforms. These programmes, also called bots, sift through all this online content and extract specific elements, such as text, pictures, videos, and even the hidden code that makes websites function. For instance, if an AI wanted to understand online conversations, it could use web scraping to gather all the public posts and comments on a particular topic. Quite neat, right?
### Sensor data
Consider all the tech gadgets in your daily life: smartphones you carry everywhere, fitness trackers monitoring your every step, smart doorbells keeping an eye on your porch—even your fridge might be collecting data! These gadgets often have sensors that constantly gather information. They track things like your location, the temperature in your house, the sounds you make, and even your level of activity. This constant stream of data is a goldmine for AI systems, giving them a real-time view of human behaviour and surroundings. Imagine a city using AI to optimise traffic flow. It could analyse sensor data from traffic cameras and connected cars to understand current traffic patterns!
### User data
Ever wondered how those apps and websites you love keep getting better at suggesting things you might like? It's as if they can read your mind! Well, not quite, but they do learn by observing your usage patterns. These AI systems track your searches, the websites you visit, and even the things you buy online. Usually, this data collection happens with your permission (remember all that fine print you skimmed through?). But it's always good to be aware of the data trail you're leaving behind!
### Crowdsourcing
Even super-smart AI sometimes needs human judgement for certain tasks. That's where crowdsourcing comes in. Think of it as a giant online team-up! Special platforms connect AI companies with everyday people who can tackle mini-tasks to help the AI learn. Imagine this: thousands of people around the world working together to teach an AI the difference between a fluffy cat and a playful pup, all by labelling pictures!
### Public datasets
It's a collaborative world in AI; researchers and companies often release valuable datasets publicly. These are essentially massive topic-based data collections, like AI cookbooks. Universities, governments, and online communities all create datasets for areas such as language, computer vision, scientific research, and more.
### Data partnerships
Stuck trying to find the missing piece for your AI project? Data partnerships are like recipe swaps for the AI world! Companies can collaborate with other businesses, labs, or even government agencies to access special datasets they might hold. It's essentially sharing unique ingredients that no one else has. By working together and sharing this data, everyone can develop even more amazing AI!
### Synthetic data
What if the data you need just doesn't exist or is too costly or unethical to obtain? Synthetic data generation uses special AI techniques to manufacture realistic artificial data when real-world collection isn't feasible. It's like having a magic kitchen to cook up any data ingredient!
## Privacy challenges in AI data collection and usage
The data collected and used by AI models can pose serious challenges to our privacy. Here are some of the key issues:
### Data exploitation
Think about all the personal data—photos, videos, social media posts, and more—being vacuumed up to train these AI models. The issue? We often don't fully grasp how this information is used or if we agreed to it being used in that way. This creates a privacy minefield, raising serious ethical questions.
### Biased algorithms
If the data used to "teach" an AI system is biased or skewed in any way, you can bet your bottom dollar that the AI will pick up on those same prejudices. The end result could mean certain groups or individuals facing unfair treatment based on race, gender, location, and other factors. So much for artificial "intelligence" acting ethically, right?
### Lack of transparency
Have you ever tried to untangle the inner workings of a complex computer program? It's a nightmare! Well, many AI systems operate pretty much as impenetrable black boxes. We have no transparent way to see how our data is being leveraged behind the scenes. That lack of insight means we have zero control over our private information and how it gets used. Don't we deserve to know what's actually going on?
### Surveillance and monitoring
The increasing use of AI in surveillance raises some serious privacy concerns. We're talking about scarily powerful facial recognition technology that can track your every move in public spaces. When AI is conscripted into monitoring online behaviour, recognising faces, or even trying to "predict" criminal activity, it gives rise to chilling questions about mass surveillance and violating privacy rights.
### Data breaches and misuse
Last but not least, the massive data pools used to train and develop AI systems are irresistible targets for cyber attackers and data breaches. A successful heist could potentially expose reams of our most sensitive personal information to bad actors looking to exploit it. Or that leaked data could be misused in ways we never intended when we (maybe) agreed to have it collected in the first place.
## Regulatory frameworks for AI and data privacy
To make AI development safe and protect our data privacy, various regulatory frameworks exist. Here are some of the most prominent data privacy frameworks:
### The General Data Protection Regulation (GDPR)
A few years back, those pesky privacy policy updates started popping up on every website. They were a nuisance at first, but they signalled a crucial shift in how companies handle our personal data in our digital lives. Europe's landmark General Data Protection Regulation (GDPR) kicked off this new era of data transparency and user control. Companies could no longer bury their shady data practices in dense legalese. The GDPR forced them to lay it all out, giving us the power to access data profiles about ourselves, correct mistakes, and even demand complete deletion if we felt uncomfortable.
While the GDPR didn't directly target AI, the principles of openness and individual data rights it established are vital guardrails as machine learning capabilities advance at a blistering pace. After all, these AI systems feed on massive troves of our personal data—browsing habits, social posts, purchases, and more.
### The California Consumer Privacy Act (CCPA)
Seeing the European shift, California quickly followed suit with its own Consumer Privacy Act (CCPA). Like the GDPR, it empowers Californians to easily see companies' data files on them. But it goes further, letting residents opt out of having those valuable data profiles sold to shady third-party brokers and advertisers without consent. No more backdoor profiteering from our digital lives.
As AI applications become increasingly intertwined with our apps and services, robust data privacy laws like the CCPA help ensure the technology develops responsibly and ethically, especially when Californians' personal information is involved.
### The Algorithmic Accountability Act (proposed)
Apart from the GDPR and CCPA, there are broader efforts underway to keep unchecked AI from running rampant. The proposed federal Algorithmic Accountability Act could finally compel companies to rigorously assess their AI systems for discriminatory biases before unleashing them into the wild.
Think about it: We're entrusting more and more critical decisions to machines like hiring, loan approvals, and criminal risk assessments. We can't have these AI overlords unfairly denying people jobs, mortgages, or freedoms based on racism, sexism, or other insidious prejudices hard-coded into their flawed algorithms.
The Act would require companies to implement stringent bias testing and document processes to ensure their AI follows ethical, non-discriminatory practices. No more hand-waving audits or reckless corner-cutting when human rights are at stake.
### The Organisation for Economic Cooperation and Development's (OECD) AI Principles
The OECD AI Principles advocate for core principles around responsible, trustworthy AI development. Their framework emphasises keeping humans involved at every stage rather than ceding total control to machines.
It also crucially mandates transparency; we must be able to understand how AI systems arrive at decisions and hold both companies and individuals accountable for violations or harm caused. The stakes are too high in fields like healthcare and criminal justice to have AI operating as an inscrutable black box.
### The National Institute of Standards and Technology (NIST) AI Risk Management Framework
Even the US government recognises the need to monitor AI closely. Experts at the National Institute of Standards and Technology (NIST) developed a special plan to help companies assess the risks of their AI systems. This framework guides companies in considering safety, security, privacy, and potential biases.
Instead of just releasing any AI system to the public, this plan ensures companies carefully map out where their data comes from, scrutinise their AI's decisions, and test how it would handle real-world situations. They also ensure there is a way to monitor the AI to confirm it works correctly. Only after this rigorous process can an AI system be considered safe and ready for public use.
## Strategies for mitigating AI data privacy risks
AI is a powerful tool, and walking away from it isn't the solution to data privacy concerns. The good news? There are smart strategies we can employ to reduce the risks and keep personal information secure while still tapping into AI's incredible potential benefits. Here are some key approaches for safeguarding data privacy as AI continues to evolve:
- Privacy by design. Imagine building a new house and ensuring the security system is installed and the doors and windows are reinforced from the outset. Privacy by design is similar—it involves incorporating data privacy protections into the core of AI systems from the very beginning of development. By embedding these safeguards into the foundation rather than adding them later, organisations can minimise the chances of data breaches or misuse of sensitive personal information.
- Data minimisation. AI can be a data hog, consuming vast amounts of information to learn and operate. However, just as you wouldn't brew an entire pot of coffee for one cup, AI doesn't always need access to everything about you. Data minimisation involves using only the essential personal data required for a specific AI application or analysis. This approach prevents unnecessary collection and storage of your data.
- Data anonymisation and pseudonymisation. Sometimes, using personal information to train AI models is unavoidable. In such cases, anonymisation and pseudonymisation can provide crucial privacy protection. Anonymisation removes all personally identifying details from the data, making it impossible to trace it back to you. Pseudonymisation takes it a step further by replacing your personal information with random codes or aliases, effectively masking your identity. These techniques add an extra layer of protection to ensure your private information remains confidential.
- Transparency and explainability. Dealing with a decision-maker who provides no explanation for their choices can be incredibly frustrating. We cannot allow AI to operate as a mysterious black-box. Transparency and explainability efforts focus on understanding how AI reaches its conclusions using our data. With transparency, you gain a clear view of what data was input, how it was analysed, and what led the AI to produce a particular outcome. This openness ensures you know exactly how your data is being used and the logic behind AI decisions that affect you.
- Strong security measures. Just like anything housing valuable information, AI needs robust security safeguards. This means implementing encryption to scramble data, strict access controls to regulate who can view what, and regular security audits to identify and address vulnerabilities. By adopting these robust precautions, organisations can create a virtual Fort Knox to keep personal data secure.
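As a toy sketch of the pseudonymisation idea described above (illustrative only; the field names are made up, and real systems need far stronger guarantees than this), identifying fields can be swapped for opaque aliases while the alias-to-identity mapping lives in a separately protected table:

```javascript
// Toy pseudonymisation sketch: replace identifying fields with aliases,
// keeping the alias-to-identity mapping in a separate (protected) store.
// NOT a production privacy tool -- purely illustrative.
const aliasTable = new Map(); // would live in a separately secured store
let nextId = 0;

function pseudonymise(record) {
  const alias = `user-${nextId++}`;
  aliasTable.set(alias, record.email); // identity kept out of the dataset
  return { alias, country: record.country }; // only non-identifying fields remain
}

const safeRecord = pseudonymise({ email: 'jane@example.com', country: 'DE' });
// safeRecord carries no email; re-identifying it requires access to aliasTable.
```

The key design point is the separation: the dataset used for analysis or AI training holds only aliases, while re-identification requires a second, independently guarded lookup.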
By integrating these strategies, we can enjoy the benefits of AI while significantly mitigating the risks to our data privacy.
## How Digital Samba revolutionised video conferencing with privacy-focused AI integration
Video calls have revolutionised remote communication, but what if we could take it even further? At Digital Samba, we've done just that with our innovative video conferencing platform that integrates cutting-edge, privacy-focused AI capabilities. These next-level features streamline collaboration while prioritising user privacy.
One of our standout features is real-time AI captioning during meetings. This advanced technology transcribes every spoken word instantly, making meetings far more inclusive for deaf and hard-of-hearing participants, people in noisy environments, or anyone who needs an easy way to recap later. Unlike those frustratingly inaccurate automatic captions, our AI captioning is highly precise, and these transcripts feed into our summary AI. This means you can do more than just review conversations; you can get a concise analysis of the key points discussed, helping you stay on top of action items and next steps.
Unlike some video conferencing platforms that use meeting data to train their AI and glean marketing insights, we prioritise user privacy above all else. Your data remains yours, always. Our real-time AI captioning operates entirely on our secure servers located within the EU, contrasting with other platforms that rely on US-based cloud platforms. We are fully GDPR-compliant, ensuring we never use or store any of your data without your explicit consent and adhering strictly to regulatory guidelines. As an EU company, we guarantee that all your data stays within the EU.
With Digital Samba's video platform, you gain all the collaborative superpowers of AI while ensuring your personal information and meeting privacy are safeguarded.
## Conclusion
The transformative potential of AI is undeniable, but it comes with a massive responsibility to protect privacy. Striking that balance is essential. Unlocking AI's full capabilities ethically demands robust data protection, development guided by clear moral principles, and fair but firm regulation. Ensuring individuals have true control over their personal information is crucial for building public trust in AI technologies.
But we've got this. Policymakers, tech companies, and we, the regular users, have the power to harness AI's potential for good while ensuring privacy remains a sacred right in our data-driven society. No shortcuts.
Don't get left behind in the AI revolution. Supercharge your apps and websites with Digital Samba's next-level AI-powered video conferencing that's sleek, powerful, and, most importantly, takes privacy seriously. [Sign up today](https://dashboard.digitalsamba.com/signup) and get 10,000 free monthly credits!
# Exploring the React Compiler: A detailed introduction

**Written by [David Omotayo](https://blog.logrocket.com/author/davidomotayo/)✏️**
React has revolutionized how developers build user interfaces since its inception, with every iteration providing innovative and powerful ways for creating dynamic, component-based applications.
Despite its many strengths, React has traditionally lacked a dedicated compiler compared to frameworks like Vue and Svelte. This has forced developers to use Hooks like `useMemo` and `useCallback` to optimize performance and manage re-renders.
React 19, the latest iteration of the library, addresses this need for simpler performance optimization with the new React Compiler! This innovation promises to streamline frontend development with React by eliminating the need for manual memoization and optimization.
In this article, we will explore what the React Compiler is, how it works, and the benefits it brings to frontend development.
## What is a compiler?
Considering some developers might have exclusively used React throughout their frontend development journey, the term "compiler" might be alien. Therefore, it might help to start with a brief introduction to compilers in general.
While I'm not suggesting a general lack of knowledge about compilers among React developers, it's important to differentiate between traditional compilers used in programming languages and those found in web frameworks. Let’s explore how they differ and the specific functionalities of each compiler type.
### Traditional compilers
Traditional compilers are designed to translate high-level programming languages like C, Java, or Rust into lower-level machine code that can be executed by a computer's hardware.
The compiler goes through a series of phases, such as analysis, optimization, and code generation. Then, it links the machine code with libraries and other modules to produce the final executable binary that can be run on a specific platform.
### Compilers in web frameworks
Web framework compilers, on the other hand, are designed to transform declarative component-based code into optimized JavaScript, HTML, and CSS that can be run in a web browser. Although every framework compiler handles the compilation process differently, they generally go through the same phases:
* **Template parsing**: The template part of the component (HTML-like syntax) is parsed into a syntax tree
* **Script parsing**: The script part (JavaScript) is parsed to understand the component logic
* **Style parsing**: The style part (CSS) is parsed and scoped to the components
* **Code transformation**: Every framework’s compiler handles the code transformation phases differently. For example, Vue converts the template into render functions, while Svelte compiles the template and script directly into highly optimized JavaScript
* **Optimization:** Performs optimizations like static analysis, tree shaking, and code splitting
* **Code generation**: Generates the final JavaScript code that can be executed in the browser
* **Output**: The output is typically JavaScript code (along with HTML and CSS) that is ready to be included in a web application and run in the browser
## The need for a compiler in React
Now, you might be wondering: If React doesn't have a compiler, how does it handle the concepts discussed earlier, since they seem essential to any web framework? The answer lies in React's core mental model, which we need to understand first before we dive into the intricacies of the React Compiler.
At its core, React uses a declarative programming model. This model lets you describe how the UI should look based on the current application state. Instead of detailing the steps to manipulate the DOM and update the UI (imperative programming), you specify the desired outcome, and React takes care of updating the DOM to match that outcome.
Take the following component, for example:
```javascript
function ReactComponent() {
return (
<div>
<h1>Hello, World!</h1>
</div>
);
}
```
On initial render, this snippet declares that when `ReactComponent` is rendered, it should produce a `div` containing an `h1` element with the text `"Hello, World!"`.
Now, let's suppose this component has a state and accepts props from a parent component:
```javascript
function ReactComponent(props) {
const [state, setState] = useState(null);
return (
<div>
<h1>Hello, World!</h1>
</div>
);
}
```
In the case where the state or props of this component change, React undergoes a reactive process to re-render the component. This ensures its declarations are always up to date with the current state.
This process is handled through React's reconciliation, which determines the minimal number of changes needed to update the UI to match the new state.
During reconciliation, React uses an in-memory representation of the UI called [the virtual DOM](https://blog.logrocket.com/virtual-dom-react/). It marks components needing updates and uses a diffing function to efficiently identify the changes between the old and new virtual DOM before updating the real DOM.
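To make the diffing idea concrete, here is a toy sketch in plain JavaScript. This is an illustration of the concept only, not React's actual reconciliation algorithm; the node shape and `diff` function are invented for this example:

```javascript
// Toy virtual DOM nodes: plain { type, props } objects.
// Illustrative sketch of diffing -- NOT React's real algorithm.
function diff(oldNode, newNode) {
  const patches = [];
  if (oldNode.type !== newNode.type) {
    // Different element types: React would tear down and rebuild this subtree.
    patches.push({ op: 'replace', node: newNode });
    return patches;
  }
  // Compare props shallowly and record only the ones that changed.
  for (const key of Object.keys(newNode.props)) {
    if (oldNode.props[key] !== newNode.props[key]) {
      patches.push({ op: 'setProp', key, value: newNode.props[key] });
    }
  }
  return patches;
}

const oldTree = { type: 'h1', props: { className: 'title', text: 'Hello, World!' } };
const newTree = { type: 'h1', props: { className: 'title', text: 'Hello, React!' } };

console.log(diff(oldTree, newTree));
// Only the changed `text` prop yields a patch; `className` produces no work.
```

The takeaway is the same as in React proper: instead of rebuilding the whole UI, only the minimal set of changes is computed and applied to the real DOM.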
The bottom line is that React is designed to re-render. As the name suggests, components are re-rendered whenever their state changes to keep the application's UI in sync with the underlying state.
### Re-rendering problems in React
As explained in the previous section, React was designed to re-render. However, this can become a problem: React not only re-renders a component when its state changes, it also re-renders every component nested inside it, all the way down the component tree.
In the following example, when the button is clicked and the `message` state updates to `"Hello, React!"`, both the `ChildComponent` and `AnotherChildComponent` components re-render. This happens even though `AnotherChildComponent` doesn’t use the `message` prop:
```javascript
import React, { useState } from 'react';
function AnotherChildComponent() {
return <p>Another child component</p>;
}
function ChildComponent({ message }) {
return <p>{message}</p>;
}
export function ParentComponent() {
const [message, setMessage] = useState('Hello, World!');
const updateMessage = () => {
setMessage('Hello, React!'); // This will cause ChildComponent and AnotherChildComponent to re-render.
};
return (
<div>
<AnotherChildComponent />
<ChildComponent message={message} />
<button onClick={updateMessage}>Update Message</button>
</div>
);
}
```
This is not necessarily bad, but if it happens too often, or if one of the components downstream is heavy (i.e., performs complex computation), it can severely affect the app's performance.
Consider this example:
```javascript
import React, { useState } from 'react';
// A function to simulate an expensive computation
function expensiveComputation(num) {
console.log('Running expensive computation...');
let result = 0;
for (let i = 0; i < 1000000000; i++) {
result += num * Math.random();
}
return result;
}
function HeavyComponent({ number }) {
// Performing the expensive computation directly in the render
const result = expensiveComputation(number);
return (
<div>
<p>Expensive Computation Result: {result}</p>
</div>
);
}
export function ParentComponent() {
const [number, setNumber] = useState(1);
const [count, setCount] = useState(0);
const incrementCount = () => {
setCount(count + 1);
};
return (
<div>
<HeavyComponent number={number} />
<button onClick={incrementCount}>Increment Count: {count}</button>
</div>
);
}
```
Every time the `count` state changes, the `HeavyComponent` re-renders and calls the `expensiveComputation` function, which will significantly impact performance.
To prevent this chain of re-renders and optimize rendering, developers had to manually memoize these components. [Memoization](https://blog.logrocket.com/react-re-reselect-better-memoization-cache-management/) is an optimization technique that involves caching the results of expensive function calls and reusing them when the same inputs occur again, preventing unnecessary re-renders. React has shipped dedicated memoization APIs since React 16.
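The caching idea itself is plain JavaScript. As a sketch of the general technique (the `memoize` helper below is illustrative, not a React API), a function's results can be cached by input:

```javascript
// Illustrative memoize helper -- not a React API, just the general technique.
function memoize(fn) {
  const cache = new Map();
  return function (arg) {
    if (cache.has(arg)) {
      return cache.get(arg); // Same input seen before: reuse the cached result.
    }
    const result = fn(arg);
    cache.set(arg, result);
    return result;
  };
}

let calls = 0;
const slowSquare = memoize((n) => {
  calls += 1; // Track how often the underlying function actually runs.
  return n * n;
});

slowSquare(4); // computed; calls === 1
slowSquare(4); // served from cache; calls stays at 1
```

React's memoization tools apply the same input-to-cached-output idea to components, computed values, and function references rather than to arbitrary function arguments.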
React offers a few tools for memoization: [`React.memo`, `useMemo`](https://blog.logrocket.com/react-memo-vs-usememo/), and [`useCallback`](https://blog.logrocket.com/react-usememo-vs-usecallback/). `React.memo` wraps a component so it skips re-rendering when its props haven’t changed, while the `useMemo` and `useCallback` Hooks cache computed values and function references between renders.
Using this knowledge, we can optimize the previous example:
```javascript
import React, { useState, useMemo } from 'react';
// A function to simulate an expensive computation
function expensiveComputation(num) {
console.log('Running expensive computation...');
let result = 0;
for (let i = 0; i < 1000000000; i++) {
result += num * Math.random();
}
return result;
}
const HeavyComponent = React.memo(({ number }) => {
// Memoize the result of the expensive computation
const result = useMemo(() => expensiveComputation(number), [number]);
return (
<div>
<p>Expensive Computation Result: {result}</p>
</div>
);
});
export function ParentComponent() {
const [number, setNumber] = useState(1);
const [count, setCount] = useState(0);
const incrementCount = () => {
setCount(count + 1);
};
return (
<div>
<HeavyComponent number={number} />
<button onClick={incrementCount}>Increment Count: {count}</button>
</div>
);
}
```
Here, we use the `useMemo` Hook to memoize the result of the `expensiveComputation` function in the `HeavyComponent` component, so the computation is only recalculated when the `number` prop changes. We also use `React.memo` to wrap the `HeavyComponent` to prevent it from re-rendering unless its properties change.
These memoization Hooks are powerful and work well. However, [using them correctly is challenging](https://blog.logrocket.com/when-not-to-use-usememo-react-hook/). It's hard to know when and how to use them, leading developers to clutter their code with many functions and components wrapped with `useCallback` and `useMemo`, hoping to improve app speed.
This is where the React Compiler comes in.
## What is the React Compiler?
React Compiler, originally known as [React Forget](https://react.dev/blog/2022/06/15/react-labs-what-we-have-been-working-on-june-2022#react-compiler), was first introduced at React Conf in 2021. It’s a low-level compiler that takes your application’s code and converts it into code where components, their properties, and Hook dependencies are automatically optimized.
Essentially, the React Compiler applies optimizations analogous to `React.memo`, `useMemo`, and `useCallback` where necessary to minimize the cost of re-rendering.
The compiler has evolved since it was first introduced. Its recent architecture does more than just memoize components. It performs complex checks and optimizes advanced code patterns, such as local mutations and reference semantics. Meta apps, such as Instagram, have been using the compiler for some time.
### How React Compiler works
During the compilation process, the React Compiler refactors your code and uses a Hook called `_c`, formerly known as `useMemoCache`, to create an array of cacheable elements.
It does this by taking parts of each component and saving them to slots in the array. Then, it creates a memoization block, which is basically an `if` statement that checks if any of the elements in the array have changed the next time the component is invoked. If there aren’t any changes, it returns the cached (original) element.
For example, if we have a simple component like the following:
```javascript
function SimpleComponent({ message }) {
return (
<div>
<p>{message}</p>
</div>
);
}
```
The compiled output will look like this:
```javascript
function SimpleComponent(t0) {
const $ = _c(2);
const { message } = t0;
let t1;
if ($[0] !== message) {
t1 = (
<div>
<p>{message}</p>
</div>
);
$[0] = message;
$[1] = t1;
} else {
t1 = $[1];
}
return t1;
}
```
Let’s break down what’s happening in this code. First, the compiler uses the `_c` Hook to initialize an array with two slots to cache the component’s state. Then, it destructures the `message` prop from the `t0` props object and creates a `t1` variable that will hold the JSX element:
```javascript
const $ = _c(2);
const { message } = t0; // Extract message prop from props object (t0)
let t1; // Variable to hold the JSX element
```
Next, the compiler creates a memoization block that checks whether the `message` prop has changed. If it has, the compiler creates a new JSX element, assigns it to the `t1` variable, and updates the cache array with the new `message` and JSX element. If `message` hasn’t changed, it reuses the cached JSX element:
```javascript
if ($[0] !== message) { // Checks if `message` prop has changed
t1 = ( // Creates JSX element if `message` changed
<div>
<p>{message}</p>
</div>
);
$[0] = message; // Update cache with new message
$[1] = t1; // Update cache with new JSX element
} else {
t1 = $[1]; // Reuse JSX element from cache if message hasn't changed
}
```
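Stripped of JSX, the compiled pattern above boils down to a cache array plus a change check. Here is a runnable sketch of that pattern (a plain object stands in for the JSX element, and an explicit module-level array stands in for the `_c` Hook):

```javascript
// Sketch of the compiled memoization pattern: an explicit cache array
// stands in for the `_c` Hook, a plain object stands in for JSX.
const EMPTY = Symbol('empty');
const $ = [EMPTY, EMPTY]; // two cache slots, as in _c(2)

function SimpleComponent(message) {
  let t1;
  if ($[0] !== message) {
    t1 = { type: 'p', children: message }; // "create the element"
    $[0] = message; // update cache with new message
    $[1] = t1;      // update cache with new element
  } else {
    t1 = $[1]; // reuse the cached element: referentially identical
  }
  return t1;
}

const first = SimpleComponent('Hello');
const second = SimpleComponent('Hello');
// first === second: the exact same object comes back, so React can skip work.
const third = SimpleComponent('World');
// third !== second: the prop changed, so a fresh element was created.
```

The referential identity is the whole point: because the unchanged render returns the very same object, React’s reconciliation can bail out of that subtree without diffing it.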
If you want to try out the React Compiler, you can use [the React Compiler playground](https://playground.react.dev/#N4Igzg9grgTgxgUxALhAgHgBwjALgAgBMEAzAQygBsCSoA7OXASwjvwFkBPAQU0wAoAlPmAAdNvhgJcsNgB5CTAG4A+ABIJKlCPgDqOSoTkB6RaoDc4gL7iQVoA) or integrate it with your project. We’ll cover how to do so in the following sections. Additionally, if you’d like a deep dive and a more complex overview of the compiler, watch Sathya Gunasekaran’s "[React Compiler deep dive](https://www.youtube.com/watch?v=0ckOUBiuxVY&t=9309s&ab_channel=ReactConf)" talk at React Conf.
### Code compliance for optimization
To efficiently compile your code and optimize performance, the React Compiler has to understand your code. It does this with its knowledge of JavaScript and [the rules of React](https://react.dev/reference/rules/components-and-hooks-must-be-pure#components-and-hooks-must-be-idempotent). These are subtle guidelines designed to help developers write predictable and efficient React applications with fewer bugs.
Here are some of React’s rules:
* **Hooks should only be called at the top level**: Hooks must be called at the top level of a functional component or custom Hook. They should not be placed inside loops, conditions, or nested functions to ensure the Hooks are called in the same order each time a component renders
* **Only call Hooks from React code**: Hooks should be called only from functional components or custom Hooks. They should not be used in regular JavaScript functions, class components, or any non-React code
* **Side effects must run outside of the render phase**: Side effects, such as data fetching, subscriptions, or manual DOM manipulations, should not occur directly in the render phase. Instead, they should be managed using [the `useEffect` Hook](https://blog.logrocket.com/useeffect-react-hook-complete-guide/) or similar Hooks
* **Props and states are immutable**: Props and states should be treated as immutable. Instead of modifying them directly, use state setters (e.g., `setState` or similar functions provided by [Hooks like `useState`](https://blog.logrocket.com/guide-usestate-react/))
In the compilation process, if the compiler detects that these rules are being violated, it’ll automatically skip over the components or Hooks where such violations occur and safely move on to other code. Similarly, if your code is already well-optimized, you may not notice any major improvement with the compiler turned on.
Additionally, if the compiler finds that it can’t preserve the optimization in a memoized component, it’ll skip that component and instead let the manual memoization do its thing.
Take the following code, for example:
```javascript
import React, { useState, useMemo } from 'react';
function ItemList({ items }) {
const [filter, setFilter] = useState('');
const [sortOrder, setSortOrder] = useState('asc');
const filteredAndSortedItems = useMemo(() => {
const filteredItems = items.filter(item => item.includes(filter));
const sortedItems = filteredItems.sort((a, b) => {
if (sortOrder === 'asc') return a.localeCompare(b);
return b.localeCompare(a);
});
return sortedItems;
}, [filter, sortOrder, items]);
return (
<div>
<input
type="text"
value={filter}
onChange={e => setFilter(e.target.value)}
placeholder="Filter items"
/>
<select value={sortOrder} onChange={e => setSortOrder(e.target.value)}>
<option value="asc">Ascending</option>
<option value="desc">Descending</option>
</select>
<ul>
{filteredAndSortedItems.map((item, index) => (
<li key={index}>{item}</li>
))}
</ul>
</div>
);
}
export default ItemList;
```
The React Compiler will skip this code because the value being memoized might be mutated elsewhere in the component or application, which will invalidate the memoization, leading to bugs and performance issues.
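One concrete example of the mutation patterns the compiler has to reason about is that `Array.prototype.sort` reorders the array it is called on in place (in the snippet above it happens to be the fresh array returned by `filter`, but the compiler must be able to prove that). A quick demonstration:

```javascript
// Array.prototype.sort mutates its receiver in place and returns the
// same array object -- one reason in-component mutation is hard to memoize.
const items = ['banana', 'apple', 'cherry'];
const sorted = items.sort((a, b) => a.localeCompare(b));

console.log(sorted === items); // true: same array object, now reordered
console.log(items); // ['apple', 'banana', 'cherry']

// A non-mutating alternative: copy first, then sort the copy.
const original = ['banana', 'apple', 'cherry'];
const sortedCopy = [...original].sort((a, b) => a.localeCompare(b));
console.log(original); // unchanged: ['banana', 'apple', 'cherry']
```

If a memoized value can be reordered or rewritten after it was cached, the cache no longer reflects reality, which is exactly the hazard the compiler is guarding against when it skips code like this.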
If you try the code in the React Compiler playground, it flags an error explaining why the component was skipped.
## Trying out the React Compiler
The React Compiler is still experimental and isn't recommended for use in production. However, if you want to try it in smaller projects or integrate it with existing projects, this section will guide you on how to get started with the React Compiler.
Before installing the compiler, there are a few prerequisites that you might want to take care of. This includes checking your project’s compatibility with the compiler and installing an ESLint plugin for the compiler.
### Checking compatibility with the React Compiler
To check if your codebase will be compatible with the compiler, run the following command in your project’s directory:
```bash
npx react-compiler-healthcheck@latest
```
This command will check how many components can be optimized, whether [strict mode is enabled](https://blog.logrocket.com/using-strict-mode-react-18-guide-new-behaviors/), and if you have libraries installed that may be incompatible with the compiler. If your code complies with the rules, the health check reports that your components can be safely compiled.
### Installing the ESLint plugin for the React Compiler
The React Compiler ships with an ESLint plugin that helps ensure your code follows the rules of React and surfaces violations. The plugin is independent of the compiler, so you can use it even without enabling the compiler itself. Install it with:
```bash
npm install eslint-plugin-react-compiler
```
Then add it to your ESLint config:
```javascript
module.exports = {
plugins: [
'eslint-plugin-react-compiler',
],
rules: {
'react-compiler/react-compiler': "error",
},
}
```
### Using the React Compiler in existing projects
At the time of writing, the React Compiler is only compatible with React 19. To use the compiler, you need to upgrade your project to the latest testing versions of React and React DOM. To do this, run the following command in your project’s directory:
```bash
npm install --save-exact react@rc react-dom@rc
```
Next, install the Babel plugin for the compiler, `babel-plugin-react-compiler`. This plugin lets the compiler run in the build process:
```bash
npm install babel-plugin-react-compiler
```
After installing, add the following to the `plugins` array in your Babel config file:
```javascript
// babel.config.js
const ReactCompilerConfig = { /* ... */ };
module.exports = function () {
return {
plugins: [
['babel-plugin-react-compiler', ReactCompilerConfig], // must run first!
// ...
],
};
};
```
If you use Vite, add the plugin to the `vite.config` file instead:
```javascript
// vite.config.js
const ReactCompilerConfig = { /* ... */ };
export default defineConfig(() => {
return {
plugins: [
react({
babel: {
plugins: [
["babel-plugin-react-compiler", ReactCompilerConfig],
],
},
}),
],
// ...
};
});
```
Make sure the compiler runs first in the build pipeline. In other words, if you have other plugins, list them after the compiler. Also, add the `ReactCompilerConfig` object at the top level of the config file to avoid errors.
Now, if you start the development server and open up [React Developer Tools](https://blog.logrocket.com/debug-react-apps-react-devtools/) in the browser, you should see a `Memo ✨` badge displayed next to components that have been optimized by the compiler. Et voilà! You’ve successfully integrated the React Compiler into your project.
### Using the React Compiler in new projects
The best way to get started with the React Compiler in a new project is to install the canary version of Next.js, which has everything set up for you. To start a project, use the following command:
```bash
npm install next@canary babel-plugin-react-compiler
```
Next, turn on the compiler using the `experimental` option in the `next.config.js` file:
```javascript
// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
experimental: {
reactCompiler: true,
},
};
module.exports = nextConfig;
```
The experimental option ensures that the compiler is supported in the following environments:
* App Router
* Pages Router
* Webpack
## Scoped usage of the React Compiler
As we’ve discussed, the compiler is still experimental and isn’t recommended for use in production. This is due to JavaScript’s flexible nature, which makes it impossible for the compiler to catch every possible violation of the rules of React; code that breaks those rules undetected may still compile, producing incorrect behavior.
For this reason, you might want to limit the scope of the React Compiler to certain parts of your application rather than the entire project. This way, you can gradually adopt the compiler and experiment with its benefits without affecting your entire codebase.
There are two main ways to achieve this. Let’s take a quick look at both.
### Using the React Compiler in a specific directory
You can configure your build setup to use the React Compiler only for files in a specific directory. To do this, add the following code to the `ReactCompilerConfig` object in your Babel or Vite config file from earlier:
```javascript
const ReactCompilerConfig = {
sources: (filename) => {
return filename.indexOf('src/path/to/dir') !== -1;
},
};
```
Then replace `'src/path/to/dir'` with the path to the folder you want the compiler to operate on.
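The `sources` callback is just a plain function that receives each filename and returns whether the compiler should process it, so you can sanity-check its behavior in Node outside of any build:

```javascript
// Sketch of the `sources` filter: return true for files the compiler
// should process, false for everything else.
const ReactCompilerConfig = {
  sources: (filename) => filename.indexOf('src/path/to/dir') !== -1,
};

console.log(ReactCompilerConfig.sources('src/path/to/dir/App.jsx')); // true
console.log(ReactCompilerConfig.sources('src/legacy/Old.jsx'));      // false
```

Newer codebases might prefer `filename.includes('src/path/to/dir')`, which is equivalent here.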
### Using the directive opt-in option
Alternatively, you can opt-in to the compiler on a per-file basis using a special directive comment at the top of your file. This method lets you enable the compiler for individual files without changing the overall setup. To do this, simply add the following to the `ReactCompilerConfig` object:
```javascript
const ReactCompilerConfig = {
compilationMode: "annotation",
};
```
Next, add the `"use memo"` directive to any component you want the compiler to optimize individually:
```javascript
// src/app.jsx
export default function App() {
"use memo";
// ...
}
```
## Conclusion
The React Compiler may not offer much as it is right now, especially compared to other frameworks’ capabilities. Unfortunately, it also can’t yet eliminate dependency arrays in Hooks like `useEffect`, an improvement that many developers eagerly anticipate.
Nonetheless, the compiler offers a glimpse into what could be possible in the near future. For instance, there’s potential for making dependency arrays in Hooks obsolete. This could simplify state management and side effects in React components, reducing boilerplate code and minimizing the risk of bugs related to incorrect dependencies.
In the meantime, experimenting with the React Compiler and contributing feedback will help shape its development as well. Happy hacking!
---
## Get set up with LogRocket's modern React error tracking in minutes:

1. Visit https://logrocket.com/signup/ to get an app ID.
2. Install LogRocket via NPM or script tag. `LogRocket.init()` must be called client-side, not server-side.

NPM:

```bash
$ npm i --save logrocket
```

```javascript
import LogRocket from 'logrocket';
LogRocket.init('app/id');
```

Script tag (add to your HTML):

```html
<script src="https://cdn.lr-ingest.com/LogRocket.min.js"></script>
<script>window.LogRocket && window.LogRocket.init('app/id');</script>
```

3. (Optional) Install plugins for deeper integrations with your stack:
* Redux middleware
* ngrx middleware
* Vuex plugin
[Get started now](https://lp.logrocket.com/blg/signup) | leemeganj |
1,915,856 | Dom Manipulate in angular | <h2 #hello1>Hello world</h2> <button (click)="handleFunction()">Change... | 0 | 2024-07-08T13:42:30 | https://dev.to/webfaisalbd/dom-manipulate-in-angular-52h9 | ```html
<h2 #hello1>Hello world</h2>
<button (click)="handleFunction()">Change Text</button>
```
```ts
import { Component, ElementRef, ViewChild } from '@angular/core';

@Component({ selector: 'app-hello', templateUrl: './hello.component.html' })
export class HelloComponent {
  // Grab the <h2 #hello1> element from the template
  @ViewChild('hello1', { static: true }) helloElement!: ElementRef<HTMLHeadingElement>;

  handleFunction() {
    this.helloElement.nativeElement.textContent = 'Hello Angular!';
    this.helloElement.nativeElement.style.color = 'red';
    this.helloElement.nativeElement.style.backgroundColor = 'yellow';
    this.helloElement.nativeElement.style.fontSize = '30px';
  }
}
```
| webfaisalbd | |
1,915,857 | MongoDB | Learn MongoDB and proudly say "I know MongoDB!" Here is a checklist to help you become a... | 0 | 2024-07-08T13:42:57 | https://dev.to/debos_das_9a77be9788e2d6e/mongodb-1ef9 |  | Learn MongoDB and say with pride, "I know MongoDB!" Here is a checklist to help you become a skilled MongoDB developer.
📚 Meet MongoDB: Get familiar with MongoDB, its basic concepts, and what makes it special as a database.
📘 Terminology and Data Model: Learn MongoDB terminology such as documents, collections, and databases, and get a detailed picture of its data model.
🔢 Data Types and Documents: Explore MongoDB's data types such as String, Number, Boolean, Array, and Object, and learn how to read and understand sample documents.
☁️ MongoDB Atlas: Set up and use MongoDB Atlas, MongoDB's cloud service platform.
🖥️ MongoDB Compass: Use MongoDB Compass to visualize and manage databases.
🖊️ MongoDB VS Code Extension: Use the MongoDB extension for VS Code to manage databases and edit code.
💻 MongoDB Community Server: Set up and use MongoDB Community Server.
🔍 Database-related methods:
-Insert One query
-Insert Many query
-Find query
-Projection concept
🔄 Query Operators:
-Comparison query operators
-Logical query operators
-Element query operators
-Evaluation query operators
📊 Other Operations:
-Sort
-Limit
-Distinct
-Row count
-Delete One and Many
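To make the query items above concrete, here are a few illustrative commands you can run in mongosh against a scratch database (the `students` collection and its fields are made up for this sketch):

```javascript
// Run in mongosh against a scratch database; names are illustrative.
db.students.insertOne({ name: "Asha", score: 88 });
db.students.insertMany([{ name: "Rafi", score: 72 }, { name: "Mitu", score: 95 }]);

// Find with a comparison operator and a projection (name only, no _id)
db.students.find({ score: { $gt: 80 } }, { name: 1, _id: 0 });

// Sort descending, limit, distinct values, and row count
db.students.find().sort({ score: -1 }).limit(2);
db.students.distinct("name");
db.students.countDocuments();

// Delete one and many
db.students.deleteOne({ name: "Rafi" });
db.students.deleteMany({ score: { $lt: 60 } });
```

These require a running MongoDB instance (local or Atlas) and the mongosh shell.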
📊 MongoDB Aggregation:
-Aggregation Sorting
-Limiting
-First and Last
-Match Condition
-Like
-Projection
🔢 Advanced Aggregation:
-Skip and Limit
-Group By
-Group By SUM, Avg, Max, Min
-Without Group By Sum Avg Max Min
-Group By Multiple
-Join By Lookup Operator
-Facet Operator
-Projection After Join
-Add New Field With Result
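A small illustrative aggregation to tie the stages above together (again for a made-up `students` collection, to run in mongosh):

```javascript
// Run in mongosh: $match filters documents first, then $group computes
// aggregates over the matching set (collection and fields are illustrative).
db.students.aggregate([
  { $match: { score: { $gt: 60 } } },
  { $group: { _id: null, avgScore: { $avg: "$score" }, maxScore: { $max: "$score" } } },
]);
```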
🔢 Aggregation Operators:
-Arithmetic Aggregation Operators
-String Aggregation Operators
-Date Aggregation Operators
-Comparison Aggregation Operators
-Boolean Aggregation Operators
-Conditional Aggregation Operators | debos_das_9a77be9788e2d6e | |
1,915,858 | Unlocking age verification—The fusion of ID document recognition and face attribute analysis | Age verification is more crucial than ever in today’s world, affecting a broad range of sectors like... | 0 | 2024-07-08T13:43:36 | https://dev.to/faceplugin/unlocking-age-verification-the-fusion-of-id-document-recognition-and-face-attribute-analysis-29l2 | programming, ai, machinelearning, datascience | Age verification is more crucial than ever in today’s world, affecting a broad range of sectors like social media and online gaming. Verifying your age is an essential step when purchasing age-restricted goods like alcohol and tobacco, joining an online gaming community, or even just creating a new social media account. However, why is age verification so important, and what changes are we seeing in this digital age?
The growth of e-commerce and digital services has made it easier for people to access a wide variety of goods and services from the comfort of their homes, but it has also made it easier for minors to circumvent age restrictions, raising serious legal and ethical concerns. Because of this, businesses need to ensure that age verification is not just compliant with the law but also protects youth from potentially harmful content and promotes a safer online environment for all.
Physically inspecting ID documents is a common step in traditional age verification procedures, which can be laborious and prone to human error. However, as technology develops, more precise and effective techniques become accessible.
For example, ID document recognition ensures that an individual is of legal age without requiring manual inspection by using advanced algorithms to scan and validate the information on an ID card or passport. This lowers the possibility of fraud while also expediting the procedure.
Face attribute analysis is another innovative technique that uses a person’s facial traits to estimate their age. This method provides a quick and non-intrusive means of age verification by using artificial intelligence to examine multiple attributes.
It’s especially helpful in situations where it might not be feasible to carry an ID, such as while registering for social media or playing online games.
We’ll go into more detail in this post about how these technologies work, their advantages, and what companies need to keep in mind to safeguard user privacy and maintain data security. By recognizing the significance of age verification and the tools at their disposal, businesses can navigate the digital landscape more effectively, keeping their services safe and compliant while providing a seamless user experience.
Read full article here.
https://faceplugin.com/age-verification-id-document-and-face-attribute-analysis/
| faceplugin |
1,915,860 | Web scraping | Hello everyone ! I am new to Data Science and Dev. I would like to ask more about web scraping to... | 0 | 2024-07-08T13:45:59 | https://dev.to/last_chance/web-scraping-35k | python, webscraping, selenium, scrapy | Hello everyone ! I am new to Data Science and Dev. I would like to ask more about web scraping to someone who knows about it... if you are also exploring... that would be fine too | last_chance |
1,915,861 | Enhancing ID Verification Systems: Unleashing the Power of On Premise Face Recognition SDKs | Security and convenience are of the utmost importance in the current digital era, on premise face... | 0 | 2024-07-08T13:46:54 | https://dev.to/faceplugin/enhancing-id-verification-systems-unleashing-the-power-of-on-premise-face-recognition-sdks-of3 | programming, machinelearning, datascience, softwareengineering | Security and convenience are of the utmost importance in the current digital era, on premise face recognition SDKs are a vital tool for companies and organizations. These cutting-edge solutions simplify procedures and improve security measures while offering a flawless user experience. It is more important than ever to create a reliable ID verification system for everything from building access to online identity verification.
It is impossible to exaggerate the significance of ID verification systems. Making sure that only authorized people have access to sensitive information is essential as cyber dangers keep evolving and personal data becomes a top target for criminal activity.
Reliable and effective security is offered by on premise face recognition technology. Through the use of distinctive face traits, these systems authenticate users’ identities and drastically lower the possibility of identity theft and unwanted access.
Liveness detection, when combined with face recognition technology, is essential to guarantee that the image taken is of a real person and not a picture or a video. By adding a degree of security, fraudsters will find it extremely difficult to get around the system. To verify that the person being authenticated is, in fact, there and alive, liveness detection analyzes a variety of indicators, including texture, movement, and blinking.
These systems can be enhanced by ID document recognition SDKs, which provide the automated extraction and validation of data from ID documents. This technology streamlines and improves the speed and accuracy of the document verification process.
Organizations can build a complete and reliable verification system by combining these three elements: liveness detection, ID Document recognition, and on premise face recognition.
To sum up, the integration of liveness detection, ID document recognition, and on premise face recognition software development kits (SDKs) offer a potent remedy for contemporary security issues. Together these technologies provide rapid and reliable identity verification protecting companies and their users from potential dangers.
## Understanding on premise face recognition SDKs
### Explanation of on premise face recognition technology
Instead of depending on a cloud-based service, on premise facial recognition technology installs and runs software locally within an organization’s IT infrastructure. With the help of sophisticated algorithms, this technology analyzes facial traits to produce unique digital templates, which are then compared against stored templates of confirmed identities.
A higher level of security and privacy is provided by a premise system, which operates solely within the local network and guarantees that sensitive biometric data never escapes the organization’s control.
Read full article here.
https://faceplugin.com/on-premise-face-recognition-sdk/ | faceplugin |
1,915,862 | Headless UI alternatives: Radix Primitives, React Aria, Ark UI | Written by Amazing Enyichi Agu✏️ Using React component libraries is a popular way to quickly build... | 0 | 2024-07-09T17:49:51 | https://blog.logrocket.com/headless-ui-alternatives-radix-primitives-react-aria-ark-ui | react, webdev | **Written by [Amazing Enyichi Agu](https://blog.logrocket.com/author/amazingenyichiagu/)✏️**
Using React component libraries is a popular way to quickly build React applications. Components from this type of library have many benefits. Firstly, they follow accessibility guidelines like [WAI-ARIA](https://www.w3.org/WAI/standards-guidelines/aria/), ensuring everyone will find them easy to use. Secondly, they come with styling and design so developers can focus on other aspects of their applications. Thirdly, many of them have pre-defined behaviors — for example, an autocomplete component filtering options based on the user’s input — that save time and effort compared to building from scratch.
Components from React component libraries are also optimized for performance. Because a large community or organization usually maintains them, this ensures regular updates and adherence to the most efficient coding practices. Some examples of these libraries include [Material UI](https://blog.logrocket.com/guide-material-design-react/), [Chakra UI](https://blog.logrocket.com/chakra-ui-adoption-guide/), and [React Bootstrap](https://www.youtube.com/watch?v=NlZUtfNVAkI).
However, there is limited room for customizing components from these libraries. You can usually make small changes to the components but can't change their underlying design system. A developer might want to use a component library because it handles accessibility and adds functionality to their app, but might also need those components to follow a custom design system.
Headless (unstyled) component libraries were designed to fill this gap. A headless component library is a UI library that offers fully functional components without styling. With headless components, it is up to the developer using them to style the components however they deem fit.
The most popular headless UI library at the time of this article is, of course, [Headless UI](https://headlessui.com/). While Headless UI bridges this design gap, this article will explain why Headless UI is not always the best choice by introducing three alternative libraries for unstyled components: Radix Primitives, React Aria, and Ark UI.
## Prerequisites
To follow along with this guide, you will need basic knowledge of HTML, CSS, JavaScript, and React.
## Why not just use Headless UI?
Headless UI is an unstyled React component library built by Tailwind Labs, the creators of Tailwind CSS. Headless UI’s website says the library is “designed to integrate beautifully with Tailwind CSS.” As mentioned earlier, Headless UI is the most popular in its category, with 25K stars on [GitHub](https://github.com/tailwindlabs/headlessui) and 1.35 million weekly downloads on npm.
However, Headless UI is limited in the number of unstyled components it offers — at the time of writing, it only offers 16 main components. Every other library covered in this article offers many more components to cover more use cases. Additionally, some of the libraries we’ll cover in the following sections offer helpful utility components and functions that Headless UI does not provide.
Let’s check out these alternatives!
## Radix Primitives
[Radix Primitives](https://www.radix-ui.com/primitives) is a library of unstyled React components built by the team behind [Radix UI](https://radix-ui.com/), a UI library with fully styled and customizable components. According to its website, the Node.js, Vercel, and Supabase teams all use Radix Primitives. The library has 14.8K stars on [GitHub](https://github.com/radix-ui/primitives).
You can [style the components from Radix Primitives](https://blog.logrocket.com/radix-ui-adoption-guide/#:~:text=you%20should%20know.-,Radix%20Primitives,-Radix%20Primitives%20is) using any styling solution you choose, including CSS, Tailwind CSS, or even CSS-in-JS. The components also support server-side rendering. More importantly, Radix Primitives has good documentation for each unstyled component it offers, explaining how to use them in projects.
### Installing and using Radix Primitives
The following are the steps to install and use Radix Primitives. This example imports a dialog box component from the library and styles it using vanilla CSS.
First, start a React Project using a framework of your choice, or open an existing React project.
Then, install the Radix Primitive component you need — the library publishes components as packages you can add to your application. For this example, install the `Dialog` component:
```bash
npm install @radix-ui/react-dialog
```
Next, create a file to import and customize the unstyled component for your application:
```javascript
// RadixDialog.jsx
import * as Dialog from '@radix-ui/react-dialog';
import './radix.style.css';
function RadixDialog() {
return (
<Dialog.Root>
<Dialog.Trigger className='btn primary-btn'>Radix Dialog</Dialog.Trigger>
<Dialog.Portal>
<Dialog.Overlay className='dialog-overlay' />
<Dialog.Content className='dialog-content'>
<Dialog.Title className='dialog-title'>Confirm Deletion</Dialog.Title>
<Dialog.Description className='dialog-body'>Are you sure you want to permanently delete this file?</Dialog.Description>
<div className='bottom-btns'>
<Dialog.Close className='btn'>Cancel</Dialog.Close>
<Dialog.Close className='btn red-btn'>Delete Forever</Dialog.Close>
</div>
</Dialog.Content>
</Dialog.Portal>
</Dialog.Root>
)
};
export default RadixDialog;
```
Next, let’s style the component:
```css
/* radix.style.css */
.btn {
padding: 0.5rem 1.2rem;
border-radius: 0.2rem;
border: none;
cursor: pointer;
}
.primary-btn {
background-color: #1e64e7;
color: white;
box-shadow: rgba(0, 0, 0, 0.2) 0px 2px 10px;
}
.red-btn {
background-color: #d32f2f;
color: #ffffff;
box-shadow: rgba(0, 0, 0, 0.2) 0px 2px 10px;
}
.dialog-overlay {
background-color: rgba(0, 0, 0, 0.4);
position: fixed;
inset: 0;
animation: overlayAnimation 200ms cubic-bezier(0.19, 1, 0.22, 1);
}
.dialog-content {
background-color: white;
position: fixed;
border-radius: 0.2rem;
top: 50%;
left: 50%;
translate: -50% -50%;
width: 90vw;
max-width: 450px;
padding: 2.5rem;
box-shadow: rgba(50, 50, 93, 0.25) 0px 2px 5px -1px, rgba(0, 0, 0, 0.3) 0px 1px 3px -1px;
}
.dialog-title {
font-size: 1.1rem;
padding-bottom: 0.5rem;
border-bottom: 3px solid #dfdddd;
margin-bottom: 1rem;
}
.dialog-body {
margin-bottom: 3rem;
}
.bottom-btns {
display: flex;
justify-content: flex-end;
}
.bottom-btns .btn:last-child {
display: inline-block;
margin-left: 1rem;
}
@keyframes overlayAnimation {
from {
opacity: 0;
}
to {
opacity: 1;
}
}
```
Finally, export and render the component in the DOM.
Here is the UI demo of the dialog component we styled above: 
### Radix Primitives pros and cons
Like every headless library this guide covers, Radix Primitives has many pros and cons. Some of its pros include:
* It offers 28 main components, which is many more than Headless UI offers
* Developers can install components individually, which means you can incrementally adopt Radix Primitives by installing only the necessary parts
* It offers a prop called `asChild`, which allows a developer to change the default DOM element of a Radix component, [a process that is known as Composition](https://www.radix-ui.com/primitives/docs/guides/composition)
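As a quick illustration of that last point (illustrative JSX, not taken from the Radix docs), `asChild` lets a trigger render as the child element you supply instead of its default button:

```jsx
{/* Sketch: Dialog.Trigger renders the child <a> instead of its default <button> */}
<Dialog.Trigger asChild>
  <a href="#open-dialog">Radix Dialog</a>
</Dialog.Trigger>
```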
Some cons to Radix Primitives include:
* It can be a a hassle to install all the necessary components individually through npm
* It takes time to get familiar with the anatomy of components from this library
## React Aria
[React Aria](https://react-spectrum.adobe.com/react-aria/index.html) is a library of unstyled components that Adobe released under their collection of React UI tools called [React Spectrum](https://github.com/adobe/react-spectrum). Adobe does not have a repository dedicated to React Aria, but the React Spectrum repository has 12K GitHub stars at the time of writing. Its npm package, react-aria-components, also currently receives 260K weekly downloads.
React Aria allows developers to style their components using any styling method. Developers can also install the components in this library individually using [React Aria hooks](https://react-spectrum.adobe.com/react-aria/hooks.html).
### Installing and using React Aria
We'll demonstrate how to create another dialog box, but this time we will use React Aria. This dialog box will use a similar styling to the Radix Primitives example.
First, start a new React app or open an existing project. Then, use your preferred package manager to install the component library with the command `npm install react-aria-components`.
Next, import the necessary unstyled components to create what you want. In this case, the example is building a dialog box:
```javascript
// AriaDialog.jsx
import { Button, Dialog, DialogTrigger, Heading, Modal, ModalOverlay } from 'react-aria-components';
import './aria.style.css'
function AriaDialog() {
return (
<DialogTrigger>
<Button className='btn primary-btn'>React Aria Dialog</Button>
<ModalOverlay isDismissable>
<Modal>
<Dialog>
{({ close }) => (
<>
<Heading slot='title'>Confirm Deletion</Heading>
<p className='dialog-body'>Are you sure you want to permanently delete this file?</p>
<div className='bottom-btns'>
<Button className='btn' onPress={close}>Cancel</Button>
<Button className='btn red-btn' onPress={close}>Delete Forever</Button>
</div>
</>
)}
</Dialog>
</Modal>
</ModalOverlay>
</DialogTrigger>
)
}
export default AriaDialog
```
Now, we’ll style the component. React Aria already has built-in classes you can use in CSS, including `.react-aria-Button`. You can also override the built-in classes with custom classes like the `.btn` class in this example:
```css
/* aria.style.css */
.btn {
padding: 0.5rem 1.2rem;
border-radius: 0.2rem;
border: none;
cursor: pointer;
}
.primary-btn {
background-color: #1e64e7;
color: white;
box-shadow: rgba(0, 0, 0, 0.2) 0px 2px 10px;
}
.red-btn {
background-color: #d32f2f;
color: #ffffff;
box-shadow: rgba(0, 0, 0, 0.2) 0px 2px 10px;
}
.react-aria-ModalOverlay {
background-color: rgba(0, 0, 0, 0.4);
position: fixed;
inset: 0;
animation: overlayAnimation 200ms cubic-bezier(0.19, 1, 0.22, 1);
display: flex;
justify-content: center;
align-items: center;
}
.react-aria-Dialog {
background-color: white;
border-radius: 0.2rem;
width: 90vw;
max-width: 450px;
padding: 2.5rem;
box-shadow: rgba(50, 50, 93, 0.25) 0px 2px 5px -1px, rgba(0, 0, 0, 0.3) 0px 1px 3px -1px;
outline: none;
}
.react-aria-Dialog .react-aria-Heading {
font-size: 1.1rem;
padding-bottom: 0.5rem;
border-bottom: 3px solid #dfdddd;
margin-bottom: 1rem;
}
.dialog-body {
margin-bottom: 3rem;
}
.bottom-btns {
display: flex;
justify-content: flex-end;
}
.bottom-btns .btn:last-child {
display: inline-block;
margin-left: 1rem;
}
@keyframes overlayAnimation {
from {
opacity: 0;
}
to {
opacity: 1;
}
}
```
Finally, export the component and render it in the DOM.
Here is the output of the dialog box in this example: 
### React Aria pros and cons
Some of the pros to using React Aria include:
* It offers hooks for individual components, which can be very useful for incremental adoption
* It offers 43 main components
* All its components have built-in classes. This is helpful when styling because you don’t need to create new classes in the markup
Here are some cons to using React Aria:
* Some components require a little more code setup to function properly. For example, in the dialog box, we had to destructure the `close` function, and then use it to close the box. This kind of functionality is built-in in a library like Radix
* To get a component to fully work as intended, you have to combine several other React Aria components. For example, we had to combine `Button`, `Dialog`, `DialogTrigger`, `Heading`, `Modal`, and `ModalOverlay` just to get a dialog box to work. Some of the components do not work alone. This can be overwhelming at first and takes some time to get used to
## Ark UI
[Ark UI](https://ark-ui.com) is a library of unstyled components that work in React, Vue, and Solid. Chakra Systems — the team behind Chakra UI — is also the team behind Ark UI. At the time of this writing, Ark UI has 3.3K stars on [GitHub](https://github.com/chakra-ui/ark) and gets 38K weekly downloads on npm.
Similar to Radix Primitives and React Aria, with Ark UI, you can style the headless components with whichever method you prefer (CSS, Tailwind CSS, Panda CSS, Styled Components, etc.). Ark UI is also one of the few unstyled component libraries that support multiple frameworks.
### Installing and using Ark UI
Again, we will build another dialog box, this time with Ark UI and we will style it using vanilla CSS.
As always, create a new React project or open an existing one. Then, install the Ark UI package for React using `npm install @ark-ui/react`
Next, import and use the unstyled components from Ark UI. Here is the anatomy of a dialog box in Ark UI:
```javascript
// ArkDialog.jsx
import { Dialog, Portal } from '@ark-ui/react'
import './ark.style.css'
function ArkDialog() {
return (
<Dialog.Root>
<Dialog.Trigger className='btn primary-btn'>Ark UI Dialog</Dialog.Trigger>
<Portal>
<Dialog.Backdrop />
<Dialog.Positioner>
<Dialog.Content>
<Dialog.Title>Confirm Deletion</Dialog.Title>
<Dialog.Description>Are you sure you want to permanently delete this file?</Dialog.Description>
<div className='bottom-btns'>
<Dialog.CloseTrigger className='btn'>Cancel</Dialog.CloseTrigger>
<Dialog.CloseTrigger className='btn red-btn'>Delete Forever</Dialog.CloseTrigger>
</div>
</Dialog.Content>
</Dialog.Positioner>
</Portal>
</Dialog.Root>
)
}
export default ArkDialog
```
Now, you can style the component using any method of your choice:
```css
/* ark.style.css */
.btn {
padding: 0.5rem 1.2rem;
border-radius: 0.2rem;
border: none;
cursor: pointer;
}
.primary-btn {
background-color: #1e64e7;
color: white;
box-shadow: rgba(0, 0, 0, 0.2) 0px 2px 10px;
}
.red-btn {
background-color: #d32f2f;
color: #ffffff;
box-shadow: rgba(0, 0, 0, 0.2) 0px 2px 10px;
}
[data-scope=dialog][data-part=backdrop] {
background-color: rgba(0, 0, 0, 0.4);
position: fixed;
inset: 0;
animation: backdropAnimation 200ms cubic-bezier(0.19, 1, 0.22, 1);
}
[data-scope=dialog][data-part=positioner] {
position: fixed;
top: 50%;
left: 50%;
translate: -50% -50%;
width: 90vw;
max-width: 450px;
}
[data-scope=dialog][data-part=content] {
background-color: white;
padding: 2.5rem;
border-radius: 0.2rem;
box-shadow: rgba(50, 50, 93, 0.25) 0px 2px 5px -1px, rgba(0, 0, 0, 0.3) 0px 1px 3px -1px;
}
[data-scope=dialog][data-part=title] {
font-size: 1.1rem;
padding-bottom: 0.5rem;
border-bottom: 3px solid #dfdddd;
margin-bottom: 1rem;
}
[data-scope=dialog][data-part=description] {
margin-bottom: 3rem;
}
.bottom-btns {
display: flex;
justify-content: flex-end;
}
.bottom-btns .btn:last-child {
display: inline-block;
margin-left: 1rem;
}
@keyframes backdropAnimation {
from {
opacity: 0;
}
to {
opacity: 1;
}
}
```
Finally, export the new component and render it on your page. Below is the output of the code example: 
### Ark UI pros and cons
The following are some benefits of using Ark UI:
* It has 34 main components
* It has some useful components that are challenging to implement from scratch, including a carousel and circular progress bar, which other libraries do not have
* Similar to Radix Primitives, Ark UI supports [component composition](https://ark-ui.com/react/docs/guides/composition). It also does this using the `asChild` prop
A downside to using Ark UI is that it does not have built-in classes like React Aria. Instead, the recommended way to style components is to use built-in data attributes, which consist mostly of `data-scope` and `data-part`. Here is an example:
```css
[data-scope=dialog][data-part=positioner] {
position: fixed;
top: 50%;
left: 50%;
translate: -50% -50%;
width: 90vw;
max-width: 450px;
}
```
Using this styling method is not common and will take some time to get used to. However, a developer who is uncomfortable with this method can attach custom classes to the components using `className` and style those classes directly, without needing to bring in `data-scope` or `data-part`. Here is an example:
```css
.primary-btn {
background-color: #1e64e7;
color: white;
box-shadow: rgba(0, 0, 0, 0.2) 0px 2px 10px;
}
```
## Comparing the unstyled component libraries
Below is a table that compares the three unstyled component libraries discussed in this article:
| Libraries | Radix Primitives | React Aria | Ark UI |
| ----------- | --------------------- | ---------------------- | ---------- |
| Number of components | 28 | 43 | 34 |
| GitHub stars | 14.8K | 12K (React Spectrum) | 3.3K |
| npm weekly downloads | Differs per component | 260K | 38K |
| Release year | 2020 | 2020 | 2023 |
| npm bundle sizes | Differs per component | 195.2KB | 217.6KB |
| Frameworks | React only | React only | React, Vue, and Solid |
## Conclusion
This guide discussed why developers should consider using unstyled component libraries besides Headless UI. We covered three libraries in detail, each with unique patterns that frontend developers must be aware of. But overall, they all serve the purpose of unstyled component libraries properly — Radix Primitives allows developers to install components individually, which is especially helpful if the developer needs just a few components, React Aria works well for any React project, and Ark UI can even be used on frameworks other than React.
There are other React unstyled component libraries this article did not discuss, such as [Base UI](https://mui.com/base-ui/getting-started/) (from the Material UI team), [Reach UI](https://reach.tech/) (from the React Router team), and [many more](https://github.com/jxom/awesome-react-headless-components). Undoubtedly, these libraries solve important problems for developers, and the trend of using them does not seem to be fading anytime soon.
---
## Get set up with LogRocket's modern React error tracking in minutes:

1. Visit https://logrocket.com/signup/ to get an app ID.
2. Install LogRocket via NPM or script tag. `LogRocket.init()` must be called client-side, not server-side.

NPM:

```bash
$ npm i --save logrocket
```

```javascript
import LogRocket from 'logrocket';
LogRocket.init('app/id');
```

Script tag (add to your HTML):

```html
<script src="https://cdn.lr-ingest.com/LogRocket.min.js"></script>
<script>window.LogRocket && window.LogRocket.init('app/id');</script>
```

3. (Optional) Install plugins for deeper integrations with your stack:
* Redux middleware
* ngrx middleware
* Vuex plugin
[Get started now](https://lp.logrocket.com/blg/signup)
| leemeganj |
1,915,864 | Preparing for the 100 Days of MERN Full Stack Challenge: The Goals, Plan, and Expectations | Hello developers 👋, I’m Naresh Kumar, a beginner in Full Stack Development from India. Tomorrow, I’m... | 0 | 2024-07-08T14:10:29 | https://dev.to/naresh_kmu_68/preparing-for-the-100-days-of-mern-full-stack-challenge-the-goals-plan-and-expectations-b2h | 100daysofcode, fullstack, webdev, programming |
**Hello developers 👋,**
I’m Naresh Kumar, a beginner in Full Stack Development from India. Tomorrow, I’m starting the #100DaysOfFullStackChallenge to improve my full-stack skills and become job-ready in the next 100 days 🚀.
**My Goals 🎯**
- Improve my coding skills 🧑💻 and create outstanding full-stack projects.
- Build and deploy at least three full-stack projects.
- Gain a solid understanding of the MERN stack (MongoDB, Express, React, Node.js).
**My Tech Stack 🛠️**
- Framework: React.js
- CSS Libraries: TailwindCSS & Bootstrap
- Database: MongoDB
- Dev Tools: Git, VS Code, GitHub
**My Plan 📅**
- I plan to become an outstanding full-stack developer in the next 100 days. Here’s a short overview of my journey:
**Days 1-5**
- Quick revision of HTML and CSS to ensure a solid foundation.
- Practice building a couple of responsive layouts.
**Days 6-15**
- Learn the basics of JavaScript.
- Practice with small projects or coding challenges to strengthen my understanding.
**Days 16-25**
- Learn the basics of React.js and build simple React components.
- Create a small project to apply what I've learned.
**Days 26-35**
- Explore the basics of backend development with Node.js and Express.js.
- Set up a simple server and understand routing.
**Days 36-45**
- Create basic and responsive frontend projects using React.js, HTML, and CSS.
- Integrate with my backend server to make it a full-stack project.
**Days 46-55**
- Learn about database management with MongoDB and connect it with Express.js.
- Perform CRUD operations and build a simple API.
**Days 56-65**
- Dive into state management in React with Redux.
- Implement state management in a project.
**Days 66-75**
- Start creating full-stack projects combining React.js, Node.js, Express.js, and MongoDB.
- Focus on at least one full project during this period.
**Days 76-85**
- Explore new technologies and tools in full-stack development (e.g., testing with Jest, version control with Git/GitHub).
- Implement these in my ongoing projects.
**Days 86-95**
- Learn about authentication and authorization technologies (e.g., JWT, OAuth).
- Add authentication to my projects.
**Days 96-100**
- Build a comprehensive full-stack project for practice purposes.
- Ensure it includes all the aspects I've learned (frontend, backend, database, state management, authentication).
| naresh_kmu_68 |
1,915,870 | PYTHON INSTALLATION (IDLE & COLLAB Execution Test) | https://www.python.org/downloads/ check the system type of yours. The latest series will not work on... | 0 | 2024-07-08T14:09:33 | https://dev.to/pradeepmtm/python-installation-idle-collab-execution-test-ilc | python, installation | https://www.python.org/downloads/
Check your system type first.
The latest Python releases will not run on Windows 7 or earlier.













**You can use IDLE, an integrated development environment (IDE) for Python, to write and execute Python scripts.**



**Choose File --> New File**

**Type the code & Save the file**




The same result can be achieved in Google Colab, as shown below.

**Using Colab (alternate way)**
https://colab.research.google.com/


| pradeepmtm |
1,915,865 | Leveraging Perplexity AI for frontend development | Written by Peter Aideloje✏️ Perplexity AI has captured the public's attention and impressed tech... | 0 | 2024-07-10T15:18:46 | https://blog.logrocket.com/leveraging-perplexity-ai-frontend-development | frontend, webdev | **Written by [Peter Aideloje](https://blog.logrocket.com/author/peteraideloje/)✏️**
Perplexity AI has captured the public's attention and impressed tech giants like Amazon and Nvidia with its unique approach to AI. It isn't your typical search engine — it blends the clear answers of a chatbot with the detailed information and sources of traditional search. No more wading through endless links!
Developed by ex-Google and OpenAI minds, Perplexity AI aims to make knowledge accessible to everyone. It harnesses the power of cutting-edge AI language models like GPT-4 to deliver answers directly to your questions.
But Perplexity AI is more than just a search upgrade. It's constantly evolving to become a powerful tool that can transform how you approach development tasks, research information, and more. Let's explore everything Perplexity AI has to offer!
## Getting started with Perplexity AI
Your first step is creating your Perplexity AI account. Head over to [Perplexity AI's website](http://perplexity.ai) and click the **Sign Up** button:  You'll be presented with a few convenient options to create your account:  If you have a Google or Apple account, you can seamlessly connect it to Perplexity AI for a quick and secure signup. If you’d rather keep things separate, choose **Continue with Email** and follow the on-screen prompts to create your account using your email address.
## Navigating the Perplexity AI Interface
Once you're signed in, you'll be greeted by Perplexity AI's user-friendly interface:  Here's a quick rundown of the key areas to familiarize yourself with:
* **Search bar**: This is where the magic happens! Type in a query related to code, functionalities, debugging challenges, or anything that sparks your curiosity. Perplexity AI will leverage its advanced AI capabilities to deliver insightful and relevant results
* **Left side panel**: At the moment, this panel is pretty simple, allowing you to navigate to Perplexity AI’s **Home**, **Discover**, and **Library** pages. As Perplexity AI continues evolving, this panel might offer additional functionalities. Watch this space for potential shortcuts, filters, or quick access to helpful resources
As you explore Perplexity AI, don't be afraid to experiment with different phrasings in the search bar. The more specific your questions are, the more precise and helpful Perplexity AI's responses will be.
## Leveraging Perplexity AI's search functionality
Perplexity AI goes beyond basic keyword searches. It offers powerful features to help you refine your research and delve deeper into topics. Let’s explore two key functionalities that will make your journey with Perplexity AI even more efficient.
### Using the Focus feature to perform focused searches
It would be frustrating to search for information on a specific topic but get bombarded with irrelevant results. Perplexity AI's **Focus** feature allows you to narrow down your search by specifying the source of the information.
Here's how it works:
* When writing your query in the search bar, you'll see a **Focus** option next to it
* If you click on **Focus**, a menu will appear with various sources you can choose from, such as academic journals, news articles, or specific websites
* Select the source that best aligns with your research needs
For example, if you're researching the latest advancements in AI, focusing on academic journals will deliver more in-depth and credible results compared to a general web search.
### Utilizing Threads for follow-up questions and exploration
Perplexity AI shines in its ability to mimic a natural conversation. This means you can ask follow-up questions in a **Thread** based on the initial response to dig deeper into a topic.
Here's how to leverage conversation flow:
* Once Perplexity AI answers your initial query, review the information and consider what aspects you want to explore further
* Type your follow-up question directly in the search bar. Perplexity AI will understand the context of your previous query and tailor its response accordingly
Asking follow-up questions lets you explore specific aspects of the topic, clarify doubts, and gain a more comprehensive understanding. Perplexity’s conversation flow makes it feel more like a knowledgeable assistant guiding you through your research journey.
You can access old **Threads** in your Perplexity **Library** using the left side panel.
## Exploring Perplexity Pro Search (formerly Copilot)
Basic search engines tend to throw endless links at you without necessarily understanding your search intent. Enter Perplexity Pro Search, your digital research buddy powered by cutting-edge AI like GPT-4 and Claude 3.
Unlike a traditional search, Perplexity Pro Search — formerly its Copilot feature — acts like a personal assistant, conversing with you to truly understand your needs. With Pro Search, you’ll get:
* Clarifying questions that pinpoint your exact needs, so you don't have to sift through irrelevant information
* Concise, informative answers, and all the sources are just a click away if you want to dig deeper
* Comprehensive searches through many sources, including academic papers, news articles, and forums, to ensure you have the full picture
Let’s see Pro Search in action. First, ask your question. Pro Search thrives on in-depth inquiries that would typically require extensive research:  You need to activate the Pro Search option on the right side of the search bar. Once activated, Pro Search might ask clarifying questions to ensure it understands your intent perfectly:  It then scours the web, searching many sources to deliver the most relevant, high-quality information:  Once Pro Search is finished, you should receive a clear, concise answer with no unnecessary fluff:  It may even give you a workable example to use:  If you want more information, you can easily access every source Pro Search used for further exploration:  Here's a cool feature: Pro Search doesn't stop at just answering your initial question. It understands that your research journey might have more layers. After providing your answer, Perplexity Copilot suggests additional related questions to explore alongside concise answers:  You can choose from Perplexity’s free and subscription Pro Search plans:
* **Free Plan:** Get five Pro Searches every four hours
* **Subscription Plan:** Get up to 600 Pro Searches per day
## Use cases for Perplexity Pro Search
You can apply Perplexity AI’s Pro Search Feature to countless research needs, beyond just basic keyword searches. In terms of frontend development, Pro Search can help you with:
* **Code suggestions**: Based on the context you provide to Pro Search, it can generate code snippets and recommendations to give you a jumping-off point for your project
* **Debugging**: If you’re having trouble with your code, you can use Pro Search to dive into potential issues and suggested fixes
* **Documenting work**: Documenting and commenting in your code can be important if you’re working on a larger team, but this task can also be time-consuming. Using Perplexity AI and its Pro Search feature can help save you time and effort
* **Finding resources**: If you have questions during your development process but you’re not sure where to get started, ask Pro Search and dig into the list of resources it provides
* **UI and component design ideas**: Get inspired by using Pro Search to generate example components and layouts for your frontend
Beyond these examples, Pro Search can be applied to countless research needs. Here's how it can transform your workflow across different fields:
* **Academic research**: If you’re struggling to navigate mountains of academic literature, Pro Search can cut through the noise by finding the most relevant sources for your topic. It can even summarize key points from research papers, saving countless hours of reading and analysis, and freeing you up to focus on deeper analysis and critical thinking
* **Professional research**: Research is part of almost every professional field. Lawyers can use Pro Search to easily pinpoint specific case laws, saving valuable time scouring legal databases. Marketers can gain valuable insights by analyzing trend reports from diverse sources. Developers facing coding challenges can find solutions and troubleshoot issues faster, keeping projects on track
* **Daily news digest**: On a more personal note, Pro Search can be an excellent personal news curator. It gathers news articles from various credible sources, providing a balanced, comprehensive, yet concise and easily digestible view of current events without information overload
No matter what your research needs are, Pro Search empowers you to conduct thorough research efficiently and effectively.
## Improving your workflow with Perplexity AI Collections
Perplexity AI isn't just about finding information – it's about keeping it organized and readily accessible. Here's where **Collections** come in, a powerful feature designed to streamline your workflow.
Imagine juggling multiple projects, each with its own research needs. Perplexity AI Collections allow you to group your findings based on specific projects or topics. This makes it easy to revisit relevant information later without starting your search from scratch.
Here's how to create a collection. Look for the **Collections** section in the **Library** within the Perplexity AI interface:  Click the **+** symbol to create a new collection. Give your collection a descriptive name that reflects its content: 
Now, whenever you find something relevant to your project or topic during your searches, simply click the **Add to Collection** button (it might have a specific icon depending on the interface) and choose the appropriate collection to store the information.
Within your Collections, you have the flexibility to customize how you see your saved content. For concise information — like key takeaways or research findings — you can display them as bullet points for easy scanning and reference:  If you’re working on a coding project, Perplexity AI recognizes code snippets and allows you to format them correctly within your Collections, ensuring proper readability and functionality when you revisit them later:  By leveraging collections and their customization options, you can create a personalized knowledge base tailored to your specific projects and workflow needs.
## Fine-tuning your results by adjusting Perplexity AI settings
Perplexity AI offers customization options that make your research journey truly personal. Let’s see how you can fine-tune your experience and unlock the platform's full potential.
### Create a tailored profile
Perplexity AI prioritizes personalization. You can create a profile and provide details about your interests, research areas, and preferred communication style to help influence the way Perplexity AI tailors its responses to you:
* Your search results could become even more relevant, reflecting your needs and research goals
* Perplexity AI might adjust its language style to match your preferences for formal or casual speech, making interactions more comfortable and engaging
### Control your data privacy
User privacy continues to be a hot topic on the web. Perplexity AI offers settings that allow you to control how your search data is used. For example:
* You can choose to save your search history for easier access to past inquiries or opt for incognito mode for complete privacy
* Manage how your anonymized search data is used to improve Perplexity AI's overall performance
Customizing these settings ensures that Perplexity AI delivers relevant results that also respect your privacy concerns. As Perplexity AI continues to evolve, these personalization features might expand, offering an even more tailored research experience in the future.
## Testing Perplexity AI by creating a website hero section
Perplexity AI can be your secret weapon throughout the frontend development process. Let's explore how Perplexity AI works with a specific example: implementing a dynamic hero section on your website.
Imagine that you're building a website with a hero section featuring a captivating background image and a clear call-to-action button. You have the basic HTML structure below:
```html
<section class="hero">
  <div class="hero-image"></div>
  <div class="hero-content">
    <h1>Welcome to Our Website!</h1>
    <button>Learn More</button>
  </div>
</section>
```
However, you’re unsure how to implement the dynamic background image functionality using JavaScript. Let’s first set the context with Perplexity AI.
You’ll want to briefly explain that you're building a website with a hero section. Then, describe your goal: you want the hero section background image to change automatically based on a predefined set of images. Optionally, you can paste the hero section's relevant HTML structure:  You can see that Pro Search is already asking some follow-up questions to better understand the functionality you want to achieve. Perplexity will then walk you through each step you should follow:  In this case, it starts by recommending that you prepare the hero images you plan to use, then provides a basic JavaScript example:  It follows up with instructions regarding where to add the JavaScript function in your HTML file:  And finished with styling recommendations:  In this simple example, you can see how Perplexity AI delivers precise answers tailored to your app's requirements, swiftly solving your problem in seconds. You can also ask follow-up questions to refine your implementation. For example:
* Can you suggest a JavaScript function that allows me to switch between multiple background images at set intervals?
* How can I integrate a smooth transition effect between image changes?
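A minimal sketch of the kind of rotation logic these questions are driving at (the image paths and the five-second interval are my own placeholders, not Perplexity's actual output):

```javascript
// Hypothetical image paths -- swap in your real assets.
const heroImages = ["img/hero-1.jpg", "img/hero-2.jpg", "img/hero-3.jpg"];

// Pure helper: compute the next index, wrapping back to 0 at the end.
function nextImageIndex(current, total) {
  return (current + 1) % total;
}

// Browser-only wiring: rotate the hero background every 5 seconds.
if (typeof document !== "undefined") {
  const hero = document.querySelector(".hero-image");
  let index = 0;
  setInterval(() => {
    index = nextImageIndex(index, heroImages.length);
    hero.style.backgroundImage = `url('${heroImages[index]}')`;
  }, 5000);
}
```

Note that `background-image` itself can't be animated with a CSS transition; a smooth cross-fade is usually done by layering two elements and transitioning their `opacity`, which is exactly the kind of refinement the follow-up questions above would surface.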
There are plenty of ways to use Perplexity AI as a developer. Our collaborative coding example above showed how you can provide Perplexity AI with a query and code snippet to analyze. It then suggests potential code snippets to achieve the desired functionality.
You could also leverage **Collections** for the code snippets and suggestions that Perplexity AI provides. Simply use the **Add to Collection** feature described earlier to store helpful snippets. For example, you could create a "Dynamic Hero Section Code" collection to keep track of relevant JavaScript functions and logic.
For further testing and refining needs, Perplexity AI can help you troubleshoot any errors you encounter while implementing the code. You could ask it follow-up questions such as:
* I'm getting an error message about 'undefined' variables. Can you help me identify the issue?
* How can I test different transition effects for the background image changes?
## Benefits of using Perplexity AI
You’ve seen Perplexity’s benefits in action above, but let’s quickly summarize the pros of this tool:
* **Accelerated development**: Perplexity AI suggests relevant code snippets and approaches, saving you time spent searching for solutions online
* **Enhanced code quality**: By bouncing ideas off Perplexity AI and receiving feedback on potential errors, you can write cleaner and more efficient code
* **Experimentation and learning**: Perplexity AI encourages exploration by suggesting different functionalities and effects, allowing you to experiment and enhance your frontend development skills
Consider treating Perplexity AI as a coding guide, not a complete code generator. Use its suggestions as a foundation, adapt them to your specific needs, and remember to test your code thoroughly to ensure proper functionality.
## Quick comparison: ChatGPT vs. Perplexity AI
The world of AI assistants is booming, with new platforms emerging constantly. Two prominent players, Perplexity AI and ChatGPT, offer distinct strengths and cater to different user preferences. Let's compare them head-to-head to help you pick the perfect partner for your workflow.
### Core functionalities
Both Perplexity AI and ChatGPT offer a strong foundation for various tasks. They can answer your questions in a clear and informative way, generate creative text formats like poems or scripts, and even translate languages on the fly.
Additionally, both platforms boast free tiers with basic functionalities, allowing you to experiment before committing.
### Perplexity AI's strengths
Perplexity AI shines when it comes to in-depth research. It excels at real-time web searches, providing concise answers with source citations. This makes it an ideal companion for fact-checking, academic pursuits, or simply validating information you encounter online.
Since Perplexity AI prioritizes detailed explanations, it ensures that you grasp the full picture. Additionally, its **Collections** feature allows you to organize your research findings in a structured and accessible manner.
### ChatGPT's strengths
ChatGPT takes a more versatile approach. It's a true jack-of-all-trades, capable of handling content creation tasks like writing marketing copy or crafting engaging social media posts. It can even tackle mathematical calculations or offer basic coding assistance, making it a valuable tool for brainstorming sessions or exploring new ideas.
Although Perplexity is fairly user-friendly, ChatGPT's interface is a breeze to navigate even for beginners. Furthermore, it’s capable of handling complex and open-ended questions in a conversational way, so it’s ideal for creative exploration.
### Choosing between Perplexity AI and ChatGPT
Here's a quick breakdown of our comparison between ChatGPT and Perplexity AI to guide your decision:
<table>
<thead>
<tr>
<th>Feature</th>
<th>Perplexity AI</th>
<th>ChatGPT</th>
</tr>
</thead>
<tbody>
<tr>
<td>Focus</td>
<td>Research & fact-checking</td>
<td>Creative tasks & open-ended inquiries</td>
</tr>
<tr>
<td>Strengths</td>
<td>Detailed answers, source citations, collections feature</td>
<td>Versatility, user-friendly interface, open-ended questions</td>
</tr>
<tr>
<td>Ideal for</td>
<td>Researchers, students, fact-checkers</td>
<td>Content creators, writers, brainstorming sessions</td>
</tr>
</tbody>
</table>
Ultimately, as is typically the case for developer tools, the best choice depends on your specific needs and preferences.
If you prioritize in-depth research and fact-checking with clear sources, Perplexity AI is a powerful tool. But if you're looking for a versatile tool for creative tasks, open-ended inquiries, and a user-friendly experience, consider ChatGPT.
Both offer free tiers, so don't hesitate to experiment and see which platform fits best with your workflow.
## Final thoughts
In this comprehensive tutorial, we explored Perplexity AI's user-friendly interface, its powerful search features like Pro Search, and the ability to organize information through Collections. There are many ways you can harness Perplexity AI's power for research, development tasks, and more.
Now that you've seen Perplexity AI's potential, sign up for a free account and start exploring!
---
## Get set up with LogRocket's modern error tracking in minutes:
1. Visit https://logrocket.com/signup/ to get an app ID.
2. Install LogRocket via NPM or script tag. `LogRocket.init()` must be called client-side, not server-side.
NPM:
```bash
$ npm i --save logrocket
```
```javascript
import LogRocket from 'logrocket';
LogRocket.init('app/id');
```
Script Tag (add to your HTML):
```html
<script src="https://cdn.lr-ingest.com/LogRocket.min.js"></script>
<script>window.LogRocket && window.LogRocket.init('app/id');</script>
```
3. (Optional) Install plugins for deeper integrations with your stack:
* Redux middleware
* ngrx middleware
* Vuex plugin
[Get started now](https://lp.logrocket.com/blg/signup) | leemeganj |
1,915,866 | Started New thing:) | I am happy to post my first post about the new learning day1 started downloaded and did the... | 0 | 2024-07-08T14:02:18 | https://dev.to/sandy74/started-new-thing-4lh6 | tutorial, beginners, hello | I am happy to share my first post about my new learning journey. On day 1, I got set up and ran `print("Hello Galaxy");`. | sandy74 |
1,915,867 | Router Link for single page to pass data for details page | <button [routerLink]="['/invoice/payment-history', data?.invoiceId]" ... | 0 | 2024-07-08T14:03:22 | https://dev.to/webfaisalbd/router-link-for-single-page-to-pass-data-for-details-page-1k96 | ```html
<button [routerLink]="['/invoice/payment-history', data?.invoiceId]" style="margin-right: 5px;" mat-mini-fab matTooltip="View"
color="accent">
<mat-icon>remove_red_eye</mat-icon>
</button>
```
---
## Another way:
```html
<button (click)="handlePaymentHistory(data?.invoiceId)" style="margin-right: 5px;" mat-mini-fab matTooltip="View"
color="accent">
<mat-icon>remove_red_eye</mat-icon>
</button>
```
```ts
// Note: Router must be injected in the component's constructor:
// constructor(private router: Router) {}
handlePaymentHistory(invoiceId: any) {
  this.router.navigate(['/invoice/payment-history', invoiceId]);
}
```
| webfaisalbd | |
1,915,868 | Ant Design adoption guide: Overview, examples, and alternatives | Written by Elijah Asaolu✏️ Ant Design prides itself as the second most popular React UI library,... | 0 | 2024-07-10T18:54:01 | https://blog.logrocket.com/ant-design-adoption-guide | react, webdev | **Written by [Elijah Asaolu](https://blog.logrocket.com/author/asaoluelijah/)✏️**
Ant Design prides itself on being the second most popular React UI library, with over 90k stars on GitHub as of the time of writing. It's easy to see why, given its plethora of rich and easy-to-use UI components and other features that make building UIs enjoyable.
In this adoption guide, we'll explore the key features of Ant Design and how to get started using it in a React project, including setup and customization options. We'll also cover practical use cases and a comparison with popular alternatives like Material Design and Bootstrap.
## What is Ant Design?
Ant Design (Antd) was created by the Ant Group (an Alibaba affiliate) and released publicly in July 2015. Since then, the library has expanded to include support for modern UI components, extend its style and customization options, and even adapt to other JavaScript component libraries such as Vue, Angular, and Next.js.
Ant Design works similarly to other UI libraries. It provides high-quality components and design patterns out of the box. To use it in your project, you simply install it for your preferred framework, import the components you want to use, and further customize it to your needs via the component props or the Antd config file.
There are multiple big companies — including Alibaba.com, Baidu (the Chinese search engine), Tencent, and many more — already heavily using Ant Design as their primary UI library.
#### _Further reading:_
* [Introduction to Ant Design](https://blog.logrocket.com/introduction-to-ant-design/)
* [The top React UI libraries and kits in 2023 #AntDesign](https://blog.logrocket.com/top-react-ui-libraries-kits/#ant-design)
* [How to use Ant Design with Next.js](https://blog.logrocket.com/use-ant-design-next-js/)
* [How to use Ant Design with Vue 3](https://blog.logrocket.com/use-ant-design-vue-3/)
* [The best UI frameworks for Vue 3 #AntDesign](https://blog.logrocket.com/best-ui-frameworks-vue-3/#ant-design-vue)
## Why choose Ant Design?
Ant Design stands out for several key reasons:
* **Performance and ease of use**: This library provides optimized components that ensure fast and responsive applications. Its comprehensive documentation also makes it easy to spin things up quickly
* **TypeScript compatibility**: Fully compatible with TypeScript, ensuring type definitions and safety
* **Integrations**: Easily integrates with other React libraries and CSS frameworks like Tailwind CSS
* **Mobile UI library**: Antd offers a comprehensive mobile UI library with components for building intuitive mobile applications
* **Chart and animation libraries**: Built-in libraries for charts and animations enhance visual engagement
However, keep in mind that Ant Design components can be heavy. Additionally, projects with highly unique UI requirements might need extra effort for customization. Despite these cons, Ant Design's robust features and ease of use often outweigh its drawbacks, making it a strong choice for most projects.
## Getting started with Ant Design
To start using Ant Design in a React project, create a new React application by running the command below:
```bash
npm create vite@latest react-antd -- --template react
```
Then navigate into the new `react-antd` directory and install the project's default dependencies. Next, add the `antd` library by running this command:
```bash
npm install antd
```
With this setup completed, you can now import and use Ant Design UI components in your React app, as shown below:
```javascript
// App.jsx
import "./App.css";
import { Alert } from "antd";
function App() {
return (
<>
<h1>React + Ant Design</h1>
<Alert
message="Congratulations!"
description="You've successfully installed ant design."
type="success"
showIcon
/>
</>
);
}
export default App;
```
The code above showcases an example of rendering a success alert with Ant Design. Once you run your app, you should get an output similar to the one below:  Ant Design comes with numerous components and features. Let's briefly examine the most frequently used ones in a typical application.
## Key Ant Design features to know
Ant Design components are organized into the following subgroups: general, layout, navigation, data entry, data display, feedback, and other miscellaneous components. We’ll start by looking at the general group of components.
### General components
The general group contains essential components such as buttons, icons, and typography.
Ant Design’s `<Button />` component has multiple variants, which you can easily style by updating the component type prop, extending it with an icon via the icon prop, or updating its loading state via the loading prop, as shown in the code sample below:
```javascript
import { SearchOutlined } from "@ant-design/icons";
import { Button, Flex } from "antd";
function Buttons() {
return (
<>
{/* <h1>React + Ant Design</h1> */}
<Flex gap="large">
<Button>Default Button</Button>
<Button type="primary">type="primary"</Button>
<Button type="dashed">type="dashed"</Button>
<Button type="link">type="link"</Button>
<Button type="primary" danger>
Danger Button
</Button>
<Button type="primary" icon={<SearchOutlined />}>
Button with Icon
</Button>
<Button type="primary" loading>
Loading Button
</Button>
</Flex>
</>
);
}
export default Buttons;
```
Running this code gives us the following output:  Ant Design icons come preinstalled with Ant Design, providing a wide range of icons that integrate seamlessly with Ant Design components:  However, you can also install these icons as a standalone package by running the command below:
```bash
npm install @ant-design/icons --save
```
This way, you are able to use the icons in projects that may not require the full Ant Design component library.
### Layout components
The layout group provides all the necessary components to create basic and complex layouts in your application, including grid and flex layouts. As shown below, you can easily create a flexbox layout with the `<Flex />` component and immediately pass in your preferred styling options as props:
```javascript
import { Button, Flex } from "antd";
const FlexBox = () => {
return (
<Flex justify="space-around" align="center">
{[1, 2, 3, 4].map((item) => (
<Button type="primary" key={item}>
Flex Item
</Button>
))}
</Flex>
);
};
export default FlexBox;
// Note: We map over an array in this code sample to avoid repetition.
```
Running this code produces the following result:  Similarly, you can easily utilize the `<Row />` and `<Col />` components for grid layouts, as shown below:
```javascript
import { Col, Row } from "antd";
const style = {
background: "#0092ff",
padding: "20px",
};
const GridLayout = () => (
<>
<Row gutter={16}>
{[1, 2, 3, 4].map((item) => (
<Col className="gutter-row" span={6} key={item}>
<div style={style}>col-6</div>
</Col>
))}
</Row>
</>
);
export default GridLayout;
```
Here’s the result of the code above: 
### Data entry and display components
Ant Design provides multiple customizable components for data entry and data display. This results in beautifully styled and customizable form elements like inputs, color and file pickers, select components, rating components, switches, sliders, and more:
Also, Ant Design provides a `<Form />` component that makes it super easy to programmatically access form data and their states and design forms with multiple layouts.
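As a quick illustration, a minimal login form built with `<Form />` might look like the sketch below (the field names, rules, and layout are arbitrary choices for this example; `onFinish` receives the validated form values):

```javascript
import { Button, Form, Input } from "antd";

const LoginForm = () => {
  // onFinish is called with the form data, e.g. { username: "ada" }
  const onFinish = (values) => {
    console.log("Received values:", values);
  };

  return (
    <Form layout="vertical" onFinish={onFinish}>
      <Form.Item
        label="Username"
        name="username"
        rules={[{ required: true, message: "Please enter a username" }]}
      >
        <Input />
      </Form.Item>
      <Form.Item>
        <Button type="primary" htmlType="submit">
          Submit
        </Button>
      </Form.Item>
    </Form>
  );
};

export default LoginForm;
```

Because the `<Form />` tracks field state for you, there's no need to wire up `useState` for each input.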
The data display group, on the other hand, features everything from cards, carousels, accordions, tables, tabs, and popovers to even more unique components like QR codes, timelines, and more: 
#### _Further reading:_
* [Options for building React Native collapsible accordions #Ant Design accordion](https://blog.logrocket.com/building-react-native-collapsible-accordions/#accordion-ant-design)
### Feedback components
The feedback group contains all the components you need to render different message states to your users. These include alerts, drawer modals, popup confirmation modals, skeleton loaders, and more. Below you can see an example of these components: 
### Internationalization capabilities
Internationalization (i18n) is crucial for creating applications that can reach a global audience. Ant Design provides robust internationalization support for over 50 languages. This makes it easy to adapt your application to different locales.
For example, to change your app language to Spanish, you can import Ant Design's `ConfigProvider` along with the language you want to use and wrap your app around it, as shown below:
```javascript
// main.jsx
import ReactDOM from "react-dom/client";
import App from "./App.jsx";
import "./index.css";
import { ConfigProvider } from "antd";
import esES from "antd/locale/es_ES.js";

ReactDOM.createRoot(document.getElementById("root")).render(
  <ConfigProvider locale={esES}>
    <App />
  </ConfigProvider>
);
```
Once you wrap your app in the `ConfigProvider` with the locale set to `esES`, the built-in text in Ant Design components (button labels, pagination, date pickers, and so on) is rendered in Spanish, providing a localized user experience.
## Customizing your Ant Design theme
Ant Design allows you to customize the theme of your application, tailoring the look and feel to match your design taste or project requirements. We can also easily do this with the `ConfigProvider`, as shown below:
```javascript
import ReactDOM from "react-dom/client";
import App from "./App.jsx";
import { ConfigProvider } from "antd";

ReactDOM.createRoot(document.getElementById("root")).render(
  <ConfigProvider
    theme={{
      token: {
        colorPrimary: "#00ccff",
        borderRadius: 3,
      },
    }}
  >
    <App />
  </ConfigProvider>
);
```
With this change, your application's primary color should change to `#00ccff`, and the default `border-radius` should be `3px`.
## Use cases for Ant Design
Ant Design's feature set makes it suitable for various business applications. Its extensive component library supports enterprise applications, ecommerce platforms, and mobile apps.
Companies like Alibaba and Baidu leverage Ant Design for scalable internal tools and dashboards. At the same time, ecommerce platforms benefit from its customizable components for product listings, user accounts, and payment processing.
Ant Design's chart and animation libraries also make it ideal for data visualization tools, which are essential for businesses that need to analyze and present data effectively.
#### _Further reading:_
* [Data visualization with React and Ant Design](https://blog.logrocket.com/data-visualization-with-react-ant-design/)
## Ant Design vs. Material Design and Bootstrap
When people mention CSS libraries, names like Bootstrap and Material Design often come to mind first. However, Ant Design is just as effective, with its robust set of features and customization options.
While Bootstrap is renowned for its simplicity and ease of use and Material Design for its modern, mobile-first approach, Ant Design excels in providing comprehensive components suitable for complex, enterprise-level applications.
Each library has its unique strengths, making them suitable for different project needs. The table below highlights their major differences:
<table>
<thead>
<tr>
<th>Feature</th>
<th>Ant Design</th>
<th>Material Design</th>
<th>Bootstrap</th>
</tr>
</thead>
<tbody>
<tr>
<td>Performance</td>
<td>Optimized components, larger bundle size, requires tree-shaking</td>
<td>Lightweight, fast, mobile-optimized</td>
<td>Fast load times, responsive design, minimal configuration</td>
</tr>
<tr>
<td>Community</td>
<td>Large, active community, enterprise-level contributions</td>
<td>Vast community, strong Google support</td>
<td>Largest community, extensive third-party plugins and themes</td>
</tr>
<tr>
<td>Documentation/Resources</td>
<td>Detailed, well-organized, comprehensive examples, extensive API documentation</td>
<td>Extensive guidelines, numerous examples, detailed component API</td>
<td>Straightforward, easy-to-follow, numerous examples, extensive customization documentation</td>
</tr>
<tr>
<td>Component variety</td>
<td>Advanced data entry/display, charting, animation support</td>
<td>Standard UI components, rich animations</td>
<td>Basic UI components, utility classes, responsive design</td>
</tr>
<tr>
<td>Customization</td>
<td>Less variables, theme overrides, dynamic theming</td>
<td>CSS variables, theming, adaptable to brand guidelines</td>
<td>Sass variables, theming, wide range of pre-built themes</td>
</tr>
<tr>
<td>Internationalization</td>
<td>Built-in support, 50+ languages, easy locale switching</td>
<td>Supports internationalization, RTL support</td>
<td>Basic support, community-driven translations</td>
</tr>
</tbody>
</table>
This table should be a helpful resource as you choose a UI library for your next React project.
#### _Further reading:_
* [Comparing popular React component libraries](https://blog.logrocket.com/comparing-popular-react-component-libraries/#antdesign)
## Conclusion
In this adoption guide, we've explored the key features and advantages of Ant Design, demonstrated its practical use cases, and compared it with other popular UI libraries like Material Design and Bootstrap. We've covered how to get started with Ant Design, including setup and customization, and highlighted its performance, community support, and documentation resources.
You can also [explore all the code samples used in this article here](https://github.com/AsaoluElijah/antd-react). Thanks for reading!
---
## Get set up with LogRocket's modern error tracking in minutes:
1. Visit https://logrocket.com/signup/ to get an app ID.
2. Install LogRocket via NPM or script tag. `LogRocket.init()` must be called client-side, not server-side.
NPM:
```bash
npm i --save logrocket
```
Then initialize it in your code:
```javascript
import LogRocket from 'logrocket';
LogRocket.init('app/id');
```
Script Tag: add the following to your HTML:
```html
<script src="https://cdn.lr-ingest.com/LogRocket.min.js"></script>
<script>window.LogRocket && window.LogRocket.init('app/id');</script>
```
3. (Optional) Install plugins for deeper integrations with your stack:
* Redux middleware
* ngrx middleware
* Vuex plugin
[Get started now](https://lp.logrocket.com/blg/signup) | leemeganj |
1,915,869 | What Is OKR Brainstorming? How to Optimize Effectiveness When Implementing OKR | Terus Digital Marketing, part of Terus, is a provider of comprehensive digital solutions. Serv... | 0 | 2024-07-08T14:07:46 | https://dev.to/terus_digitalmarketing/okr-brainstorming-la-gi-cach-toi-uu-hieu-qua-khi-thuc-hien-okr-19m6 | | Terus Digital Marketing, part of Terus, is a provider of comprehensive digital solutions, serving businesses of all kinds in Ho Chi Minh City and nationwide. With experience in [comprehensive SEO services for higher website rankings at optimized cost](https://terusvn.com/seo/dich-vu-seo-tong-the-uy-tin-hieu-qua-tai-terus/), including many successful projects large and small, we always aim for sustainable development and long-term partnerships with our clients. Below, Terus Digital Marketing introduces OKR brainstorming.
OKR (Objectives and Key Results) brainstorming is a goal-management technique that helps organizations build and roll out strategic objectives effectively. The method consists of steps for gathering ideas and suggestions from members of the organization in order to define suitable objectives and key results.
The OKR brainstorming process helps organizations:
1. Identify the most important strategic objectives to focus on.
2. Develop specific, measurable key results for achieving the defined objectives.
3. Build alignment and unity among members of the organization around shared goals.
4. Strengthen collaboration, sharing, and accountability across departments.
5. Provide a comprehensive view of the organization's progress toward its goals.
OKR brainstorming plays an important role in building and implementing an effective OKR system, specifically:
1. Creating consensus and commitment across the whole organization: OKR brainstorming gathers opinions and ideas from members, helping everyone agree on the most important objectives to focus on and on how to achieve them. This builds strong cohesion and commitment throughout the organization.
2. Defining suitable objectives and key results: By gathering, analyzing, and selecting ideas, the organization can identify the key strategic objectives to focus on while developing specific, measurable key results.
3. Strengthening collaboration and shared accountability: OKR brainstorming gives members the chance to discuss, share, and contribute ideas together. This promotes collaboration between departments and reinforces everyone's sense of responsibility for achieving shared goals.
4. Providing a comprehensive view of progress: OKR brainstorming gives the organization an overall picture of its strategic objectives, key results, and implementation progress. This makes it possible to monitor, evaluate, and adjust promptly during execution.
Overall, OKR brainstorming is an effective goal-management tool that helps organizations achieve impressive results through the cohesion and collaboration of all their members.
Learn more in [What Is OKR Brainstorming? How to Optimize Effectiveness When Implementing OKR](https://terusvn.com/digital-marketing/okr-brainstorming-la-gi/)
Other services at Terus:
Digital Marketing:
* [All-in-one, professional Facebook Ads service](https://terusvn.com/digital-marketing/dich-vu-facebook-ads-tai-terus/)
* [Google Ads service to expand your customer base](https://terusvn.com/digital-marketing/dich-vu-quang-cao-google-tai-terus/)
Website Design:
* [Sales website design service to increase revenue](https://terusvn.com/thiet-ke-website-tai-hcm/) | terus_digitalmarketing |
1,915,871 | Top Image Labeling Tools for Streamlined Digital Asset Management | Introduction In today's digital era, companies and institutions are producing and... | 0 | 2024-07-09T11:32:29 | https://dev.to/api4ai/top-image-labeling-tools-for-streamlined-digital-asset-management-1p7h | imagelabeling, api, ai, imageprocessing | # Introduction
In today's digital era, companies and institutions are producing and overseeing a staggering amount of digital content. A recent [IDC](https://www.idc.com/) report predicts that the global creation of digital data will hit 175 zettabytes by 2025, underscoring the rapid expansion and critical need for effective digital asset management (DAM).
Effective DAM is essential for any organization aiming to optimize its workflows, enhance collaboration, and boost overall productivity. Central to successful DAM is the capability to precisely and efficiently label images. Image labeling tools are indispensable for organizing and categorizing digital assets, simplifying their retrieval and use. Well-labeled images can dramatically cut down the time spent searching for particular assets, improve data analysis, and support better decision-making.
This blog will offer an in-depth exploration of the top image labeling tools for effective digital asset management. We will examine the primary features and advantages of each tool, providing insight into how they can fulfill your specific requirements. Additionally, we will cover key factors to consider when selecting an image labeling solution, such as accuracy, scalability, integration capabilities, ease of use, automation, security, and compliance.
By the conclusion of this blog, you will have a thorough understanding of the leading image labeling solutions on the market, along with practical advice on choosing and implementing the ideal option for your organization. Whether you are managing a small collection of digital assets or an extensive library, this guide will provide you with the knowledge to enhance your DAM processes and achieve greater efficiency.
# Understanding Image Labeling and Digital Asset Management
## Definition and Significance of Image Labeling
**What is Image Labeling?** Image labeling, also referred to as image annotation, involves assigning descriptive tags or metadata to images. This process can encompass identifying objects, people, scenes, and other relevant details within an image. Image labeling can be executed manually by individuals or automatically using machine learning and artificial intelligence (AI) technologies.
**Why is Image Labeling Crucial for Digital Asset Management?** Within the realm of digital asset management (DAM), image labeling is vital for organizing and managing extensive collections of digital images. As businesses amass thousands or even millions of digital assets, the ability to swiftly and accurately find specific images becomes increasingly challenging. Image labeling mitigates this problem by providing a systematic approach to cataloging and retrieving images based on their content. This not only boosts efficiency but also enhances the overall usability of digital asset libraries.
## Benefits of Image Labeling in Digital Asset Management
**Enhanced Searchability and Organization**
A major advantage of image labeling is the considerable enhancement in searchability and organization. By assigning pertinent labels to each image, users can effortlessly locate specific assets using keywords, categories, or other search parameters. This minimizes the time spent navigating through vast amounts of disorganized files and guarantees that valuable assets are readily accessible when required. For instance, a marketing team can efficiently find images related to a particular campaign or product, thereby streamlining their workflow and boosting productivity.
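To make this concrete, here is a minimal sketch (the file names and labels are invented for illustration) of how labels turn a pile of files into a searchable index:

```python
# Hypothetical labeled assets: file name -> set of descriptive labels.
assets = {
    "IMG_0001.jpg": {"campaign-spring", "shoes", "outdoor"},
    "IMG_0002.jpg": {"campaign-spring", "jacket", "studio"},
    "IMG_0003.jpg": {"campaign-fall", "shoes", "studio"},
}

def search(labels):
    """Return the assets tagged with every requested label."""
    wanted = set(labels)
    return sorted(name for name, tags in assets.items() if wanted <= tags)

print(search(["campaign-spring"]))  # ['IMG_0001.jpg', 'IMG_0002.jpg']
print(search(["shoes", "studio"]))  # ['IMG_0003.jpg']
```

Real DAM systems index these labels in a database or search engine, but the principle is the same: retrieval is driven by the metadata, not the file names.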
**Enhanced Collaboration and Workflow Efficiency**
Image labeling significantly enhances team collaboration and streamlines workflow processes. When digital assets are accurately labeled and systematically organized, team members can more effectively share and collaborate on projects. For example, designers, marketers, and content creators can readily access and utilize labeled images, ensuring uniformity and coherence in their work. Furthermore, automated image labeling tools can speed up the labeling process, allowing teams to concentrate on more strategic tasks rather than manual data entry.
**Better Data Analysis and Decision-Making**
Precise image labeling facilitates superior data analysis and more informed decision-making. Utilizing labeled images, organizations can uncover trends, patterns, and preferences, which can shape marketing strategies, product development, and customer engagement efforts. For instance, an e-commerce business can analyze labeled images to identify which product visuals yield the best performance, enabling them to optimize their visual content for higher conversion rates. Additionally, labeled data can train machine learning models, further boosting the capabilities of AI-driven analytics.
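As a toy illustration of this kind of analysis (the labels are invented), even a simple frequency tally over a labeled library shows which subjects dominate:

```python
from collections import Counter

# Hypothetical per-image label lists from a labeled asset library.
image_labels = [
    ["shoe", "outdoor"],
    ["shoe", "studio"],
    ["jacket", "studio"],
]

# Count how often each label occurs across the library.
counts = Counter(label for labels in image_labels for label in labels)
print(counts.most_common(2))  # [('shoe', 2), ('studio', 2)]
```

The same tally, joined with performance data such as click-through rates, is the starting point for the "which visuals work best" analysis described above.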
In summary, image labeling is a vital element of effective digital asset management. It not only enhances the searchability and organization of digital assets but also improves collaboration, workflow efficiency, and data-driven decision-making. As we explore the top image labeling solutions available, it's crucial to understand these core benefits and their potential to revolutionize how organizations handle their digital assets.
# Key Features of Effective Image Labeling Solutions
## Accuracy and Precision
**Significance of Precise Labeling**
Precise image labeling is essential for the success of any digital asset management system. Accurate labels enable users to efficiently search for and retrieve the exact images they require. Inaccurate labeling can result in wasted time, decreased productivity, and potential mistakes in projects that depend on specific visual content.
**Examples of How Precision Impacts Asset Management**
In an e-commerce environment, precise image labeling can be the difference between a customer finding the correct product or encountering irrelevant search results. In the media and entertainment industry, accurate labeling enables editors to quickly locate the right footage or stills, thereby streamlining the production process. In the healthcare sector, precise image labeling is critical, as mislabeled medical images can have serious and potentially harmful consequences.
## Scalability
**Managing Large Image Volumes**
Effective image labeling solutions must be capable of scaling to handle vast quantities of images. As organizations expand and accumulate more digital assets, their labeling system should manage this increasing load without sacrificing performance or accuracy.
**Adapting to Expanding Digital Asset Collections**
Scalability also means adapting to the growth and diversification of digital asset collections. Whether dealing with new types of images, various resolutions, or different file formats, a scalable solution should seamlessly accommodate these changes. For instance, a global retail brand might need to manage images from multiple product lines and regions, necessitating a labeling solution that can scale appropriately.
## Integration Capabilities
**Compatibility with Current DAM Systems**
Integration capabilities are crucial for ensuring an image labeling solution operates seamlessly with existing digital asset management systems. Smooth integration preserves a cohesive workflow, allowing labeled images to be easily accessed and managed within the overall DAM framework.
**API Support and External Integrations**
API support and third-party integrations allow organizations to expand the functionality of their image labeling solutions. For example, APIs can automate the import and export of images and labels between systems, while third-party integrations can enhance features such as advanced search capabilities or AI-driven analytics.
## User-Friendly Interface
**Ease of Use for Non-Technical Users**
A user-friendly interface is essential for the broad adoption and effective utilization of an image labeling solution. Non-technical users, such as marketing teams or content creators, should be able to navigate and use the system without requiring extensive training or technical assistance.
**Customization and Flexibility**
Customization options and flexibility enable users to adapt the labeling process to their specific requirements. This can include creating custom labels, setting up workflows that align with their operational processes, or modifying the interface to suit their preferences. For instance, a fashion retailer might need specific labels for seasonal collections, color schemes, and fabric types.
## Automation and AI
**Role of AI in Automating Image Labeling**
Automation and AI are crucial components of contemporary image labeling solutions. AI algorithms can automatically tag images based on set criteria, drastically reducing the time and effort needed for manual labeling. This is especially beneficial for large datasets where manual labeling would be impractical.
**Advantages of Machine Learning and Deep Learning Techniques**
Machine learning and deep learning techniques boost the accuracy and efficiency of image labeling. These technologies can learn from existing labeled datasets, improving their labeling precision over time. For example, a deep learning model trained on a diverse array of product images can automatically categorize new product photos with high accuracy, even as the product range grows.
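One common post-processing step in such pipelines can be sketched in a few lines: the model proposes (label, confidence) pairs, and only labels above a chosen threshold are kept. The pairs and threshold below are invented for illustration:

```python
def filter_labels(predictions, threshold=0.8):
    """Keep predicted labels whose confidence meets the threshold,
    ordered from most to least confident."""
    kept = [(label, conf) for label, conf in predictions if conf >= threshold]
    return sorted(kept, key=lambda pair: pair[1], reverse=True)

raw = [("sneaker", 0.97), ("shoe", 0.91), ("sandal", 0.42), ("boot", 0.15)]
print(filter_labels(raw))  # [('sneaker', 0.97), ('shoe', 0.91)]
```

Tuning the threshold trades precision against recall: a higher value yields fewer but more trustworthy automatic tags.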
## Security and Compliance
**Ensuring Data Privacy and Protection**
Security and compliance are vital aspects of any image labeling solution. Ensuring data privacy and protection requires implementing stringent security measures, such as encryption, secure access controls, and regular security audits. These practices are essential for safeguarding sensitive information and maintaining trust with stakeholders.
**Compliance with Industry Standards and Regulations**
Adhering to industry standards and regulations guarantees that the image labeling solution meets legal and ethical requirements. This may involve complying with data protection laws like GDPR or HIPAA, depending on the industry. For example, healthcare organizations must ensure their labeling solutions comply with HIPAA to protect patient information.
In summary, effective image labeling solutions should provide high accuracy, scalability, seamless integration, user-friendliness, automation, and robust security. By prioritizing these features, organizations can optimize their digital asset management processes and enhance the value of their digital assets.
# Leading Image Labeling Solutions for Digital Asset Management

### [Google Cloud Vision](https://cloud.google.com/vision?hl=en)
**Overview and Key Features**
Google Cloud Vision is a powerful image labeling tool that utilizes Google's cutting-edge AI and machine learning technologies. Key features include object detection, label detection, text recognition (OCR), facial detection, and explicit content detection. It supports both batch processing and real-time analysis via its API.
**Pros and Cons**
**Pros:**
- Exceptionally accurate and dependable labeling, driven by Google's AI.
- Extensive range of features catering to diverse image analysis requirements.
- Seamless integration with other Google Cloud services.
**Cons:**
- Can be costly for large-scale operations.
- Limited customization options for niche industry needs.
**Use Cases and Success Stories**
Google Cloud Vision is employed by companies like Ocado, a leading online grocery retailer, to enhance product recognition and improve customer experience. It is also used by various media organizations for content moderation and metadata generation.

### [Amazon Rekognition](https://docs.aws.amazon.com/rekognition/latest/dg/what-is.html)
**Overview and Key Features**
Amazon Rekognition provides a robust image and video analysis service with capabilities such as object and scene detection, facial analysis, celebrity recognition, and text detection. Its strong integration with AWS services makes it an adaptable choice for businesses already utilizing the AWS ecosystem.
**Pros and Cons**
**Pros:**
- Highly scalable, capable of managing large volumes of images.
- Excellent integration with AWS services for comprehensive solutions.
- Competitive pricing with a pay-as-you-go model.
**Cons:**
- Requires familiarity with AWS for optimal utilization.
- Some advanced features may necessitate additional configuration.
**Use Cases and Success Stories**
Amazon Rekognition is used by companies like C-SPAN to automate the labeling and analysis of video content, and by Pinterest to improve image search and discovery features on its platform.

### [Microsoft Azure AI Vision](https://azure.microsoft.com/en-us/products/ai-services/ai-vision)
**Overview and Key Features**
Microsoft Azure AI Vision provides a comprehensive suite of image processing tools, including object detection, OCR, and spatial analysis. Designed for high integration with other Azure services, these tools can be easily deployed across various environments, from cloud to edge computing.
**Pros and Cons**
**Pros:**
- High accuracy with continuous enhancements from Microsoft’s AI research.
- Seamless integration with Azure’s extensive range of services.
- Excellent support and thorough documentation.
**Cons:**
- Can be expensive for smaller businesses.
- Initial setup and integration may be complex.
**Use Cases and Success Stories**
Microsoft Azure AI Vision is used by companies like Uber to enhance driver identification processes and by Walgreens to improve in-store customer experiences through advanced visual analytics.
### [Clarifai](https://www.clarifai.com/solutions/digital-asset-management)
**Overview and Key Features**
Clarifai is a prominent AI company specializing in image and video recognition. Its platform provides an extensive range of features, including custom model training, automated image tagging, and content moderation. Clarifai’s solutions are highly customizable, making them adaptable for various industries.
**Pros and Cons**
**Pros:**
- Highly customizable models designed to meet specific business requirements.
- User-friendly interface complemented by comprehensive documentation.
- Strong performance in both image and video analysis.
**Cons:**
- Steeper learning curve for custom model training.
- Pricing can be high for extensive customizations.
**Use Cases and Success Stories**
Clarifai is utilized by companies like OpenTable to improve food photo recognition and by Vevo to automate video content tagging and moderation, enhancing user engagement and ensuring compliance.

### [API4AI Image Labeling API](https://api4.ai/apis/image-labelling)
**Overview and Key Features**
API4AI offers a versatile image labeling API that performs image classification and provides labels for recognized objects. It supports an extensive label map, covering various themes from household tools to a wide variety of animals. Designed for easy integration into existing systems, it offers flexibility for developers with comprehensive API documentation.
**Pros and Cons**
**Pros:**
- Easy integration with existing applications.
- Supports a wide array of labels with high accuracy.
- Flexible pricing plans suitable for different business sizes.
**Cons:**
- Less well-known compared to larger providers.
- May require additional development effort for extensive customizations.
**Use Cases and Success Stories**
API4AI has been effectively utilized by startups and SMEs to enhance their product categorization and visual search capabilities. A notable example includes a fashion retailer using the API to automate product tagging, improving inventory management and search functionality.

### [Imagga](https://imagga.com/solutions/auto-tagging)
**Overview and Key Features**
Imagga is a versatile image recognition platform providing features such as automatic tagging, color extraction, and categorization. Renowned for its user-friendly interface and flexibility, it is a favored choice for businesses seeking to quickly integrate image recognition functionalities.
**Pros and Cons**
**Pros:**
- Easy to implement with a user-friendly API.
- Offers a wide array of features, including custom tagging.
- Competitive pricing with scalable options.
**Cons:**
- Limited advanced AI capabilities compared to other providers.
- May not be ideal for handling large volumes of data.
**Use Cases and Success Stories**
Imagga is utilized by companies like Smartphoto to automate photo tagging and categorization, significantly enhancing customer experience and operational efficiency. Another example is Bynder, a digital asset management platform that leverages Imagga for automated metadata generation and asset organization.
These leading image labeling solutions provide a variety of features and advantages designed to meet diverse business requirements. By comprehending their main attributes, strengths and weaknesses, and practical applications, organizations can make well-informed decisions to improve their digital asset management workflows.
# How to Select the Ideal Image Labeling Solution for Your Requirements
## Evaluating Your Digital Asset Management Needs
**Identifying Specific Requirements and Objectives**
Before selecting an image labeling solution, it is essential to pinpoint your specific requirements and objectives. Consider what you need the labeling solution to achieve. Are you aiming to enhance searchability within your digital asset management (DAM) system? Do you need to automate the tagging process for a large volume of images? Clearly defining your objectives will help you narrow down your options and concentrate on solutions that align with your needs.
**Evaluating Current DAM Capabilities and Identifying Gaps**
The next step is to evaluate your current digital asset management (DAM) capabilities and identify any gaps. Consider the following:
- What functionalities does your existing DAM system offer?
- Are there any bottlenecks or inefficiencies in your current workflow?
- What types of digital assets do you manage, and what is the volume?
Understanding your current system’s strengths and weaknesses will help you choose an image labeling solution that complements and enhances your existing setup.
## Comparing Features and Capabilities
**Aligning Solution Features with Your Requirements**
After identifying your needs, compare the features and capabilities of various image labeling solutions. Look for functionalities that align with your requirements, such as object detection, text recognition, facial recognition, and custom labeling. Ensure the solution can handle the types of images and metadata your organization manages.
**Planning for Future Scalability and Expansion**
It’s also crucial to consider future scalability and growth. Select a solution capable of managing an increasing volume of digital assets and adapting to evolving business requirements. For instance, if you anticipate expanding your digital asset library or incorporating new types of assets, make sure the solution can scale accordingly without sacrificing performance.
## Budget Considerations
**Understanding Pricing Models and Expenses**
Image labeling solutions come with different pricing structures, including pay-as-you-go, subscription-based, or enterprise licensing. It’s crucial to understand these models to avoid unforeseen expenses. Calculate the total cost of ownership, considering initial setup fees, ongoing subscription costs, and potential charges for additional features or usage beyond initial limits.
**Balancing Cost with Value and Benefits**
While cost is a significant factor, it shouldn’t be the only consideration. Weigh the cost against the value and benefits the solution offers. A higher-priced option might provide advanced features, greater accuracy, and superior support, making it a worthwhile investment. Consider the long-term return on investment by assessing how the solution will enhance efficiency, reduce manual labor, and improve overall asset management.
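A quick back-of-the-envelope comparison can ground this decision. The sketch below contrasts a hypothetical pay-as-you-go rate with a hypothetical flat subscription; the prices are placeholders, not any vendor's actual rates:

```python
def pay_as_you_go(images, price_per_1000=1.50):
    """Total cost when billed per 1,000 labeled images."""
    return images / 1000 * price_per_1000

def subscription(months, monthly_fee=99.0):
    """Total cost of a flat monthly subscription."""
    return months * monthly_fee

# One-off labeling of 500,000 images vs. a 12-month subscription:
print(pay_as_you_go(500_000))  # 750.0
print(subscription(12))        # 1188.0
```

Running the numbers for your own expected volumes makes it much easier to see which pricing model wins as usage grows.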
## Trial and Evaluation
**Importance of Testing Solutions Prior to Commitment**
Before finalizing your choice, it's essential to test the solutions you are considering. Most providers offer trial periods or demo versions, allowing you to evaluate their performance within your specific environment. Utilize this opportunity to determine how well the solution integrates with your existing systems and workflows.
**Key Metrics and Criteria for Assessment**
During the trial period, evaluate the solution based on critical metrics and criteria such as:
- Accuracy and reliability of labeling.
- Ease of use and user interface.
- Integration capabilities with your DAM system.
- Scalability and performance under load.
- Quality of support and documentation.
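The first of those metrics can be measured rather than eyeballed. The sketch below scores a tool's predicted labels against a hand-checked ground truth using set-based precision and recall (the labels are invented for illustration):

```python
def precision_recall(predicted, expected):
    """Precision and recall of a predicted label set vs. ground truth."""
    predicted, expected = set(predicted), set(expected)
    true_positives = len(predicted & expected)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(expected) if expected else 0.0
    return precision, recall

p, r = precision_recall(
    predicted=["shoe", "sneaker", "sock"],
    expected=["shoe", "sneaker", "laces", "sole"],
)
print(round(p, 2), round(r, 2))  # 0.67 0.5
```

Scoring a few hundred trial images this way gives each candidate solution a comparable, objective accuracy figure.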
# Conclusion
### Summary of Key Points
Efficient image labeling is crucial for effective digital asset management, offering substantial benefits such as improved searchability, enhanced collaboration, and better data analysis. As the volume of digital assets grows, choosing the right image labeling solution is increasingly important for organizations aiming to streamline workflows and optimize asset management processes.
In this blog, we covered several leading image labeling solutions:
- [Google Cloud Vision](https://cloud.google.com/vision?hl=en): Renowned for its accuracy and comprehensive features, ideal for businesses with diverse image analysis needs.
- [Amazon Rekognition](https://docs.aws.amazon.com/rekognition/latest/dg/what-is.html): Provides scalability and strong integration with AWS services, suitable for organizations seeking a cost-effective, scalable solution.
- [Microsoft Azure AI Vision](https://azure.microsoft.com/en-us/products/ai-services/ai-vision): Offers high accuracy and seamless integration with Azure services, perfect for businesses already within the Microsoft ecosystem.
- [Clarifai](https://www.clarifai.com/solutions/digital-asset-management): Highly customizable and user-friendly, making it a great fit for businesses needing tailored solutions.
- [API4AI Image Labeling API](https://api4.ai/apis/image-labelling): Versatile and flexible, suitable for startups and SMEs looking for easy integration and reliable performance.
- [Imagga](https://imagga.com/solutions/auto-tagging): Provides a wide range of features with competitive pricing, ideal for companies seeking a straightforward yet effective solution.
### Final Thoughts
Selecting the appropriate image labeling solution necessitates a thorough assessment of your unique needs, existing DAM capabilities, and future growth potential. It's crucial to evaluate each option based on features, scalability, integration abilities, user-friendliness, automation, and security to ensure it aligns with your business objectives.
The advantages of choosing the right image labeling solution are numerous. It can greatly enhance your digital asset management by increasing efficiency, reducing manual workload, and facilitating better decision-making through precise and organized data. Additionally, investing in a dependable and scalable solution ensures your organization is well-equipped to manage the increasing volume and complexity of digital assets.
Take the time to comprehensively assess your requirements, test potential solutions, and learn from others' experiences. By doing so, you'll be prepared to select the image labeling solution that best meets your organization’s needs, ultimately driving greater efficiency and value from your digital assets.
[More stories about Web, Cloud, AI and APIs for Image Processing](https://api4.ai/blog) | taranamurtuzova |
1,915,872 | Keyboard Input Characters | Most real-world Java programs and applets are graphical and window-based rather than... | 0 | 2024-07-09T22:02:03 | https://dev.to/devsjavagirls/caracteres-de-entrada-do-teclado-35n6 | java | - Most real-world Java programs and applets are graphical and window-based, not console-based.
- There is, however, one kind of console input that is relatively easy to use: reading a character from the keyboard.
- To read a character from the keyboard, we use System.in.read().
- The read() method waits until the user presses a key and then returns the result.
- The character is returned as an integer, so it must be cast to a char before being assigned to a char variable.
- By default, console input is line-buffered: a small region of memory stores the characters before they are read by the program.
- You must press ENTER to send any typed characters to the program.
- Example:

Note that main( ) begins like this:

- Since System.in.read() is being used, the program must declare the throws java.io.IOException clause to handle input errors.
- When we press ENTER, a newline sequence is inserted into the input stream.
- These characters remain pending in the input buffer until they are read.
- In some applications, we may need to remove them (by reading them) before the next input operation (covered later).
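The example in the screenshots above can be sketched as a complete program (the class name and prompt text here are illustrative, not copied from the images):

```java
import java.io.IOException;

class KbIn {
    public static void main(String[] args) throws IOException {
        System.out.print("Press a key followed by ENTER: ");
        // read() returns an int, so cast it to char before treating it as a character
        char ch = (char) System.in.read();
        System.out.println("Your key is: " + ch);
    }
}
```

Because console input is line-buffered, read() receives nothing until ENTER is pressed, and the ENTER keystroke itself stays pending in the buffer afterward.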
| devsjavagirls |
1,915,873 | API Testing Using RestAssured And Testkube | Modern applications adopt API-first design, which provides seamless communication between services... | 0 | 2024-07-08T14:10:15 | https://testkube.io/learn/api-testing-using-restassured-and-testkube | testing, restassured, kubernetes | Modern applications adopt API-first design, which provides seamless communication between services and clients. This approach is becoming increasingly popular, where APIs are designed and developed before the implementation of the actual services. This ensures that APIs are treated as first-class citizens, fostering consistency, reusability, and scalability.
Ensuring the reliability and availability of these APIs is crucial, as they are the backbone of critical business processes. API testing helps validate these services and ensure they work as expected and meet the set standards.
In the Java ecosystem, one popular tool for API testing is RestAssured. However, managing and executing these tests becomes complicated as we move toward containerized applications.
In this post, we'll examine these challenges and learn how to use RestAssured with Testkube for API testing.
## RestAssured
RestAssured is a Java library for testing REST APIs. It integrates well with build tools like Maven and Gradle, making it easy to work with. It enables automated API testing by letting developers describe HTTP requests in a simple, fluent style and focus solely on validating the API without worrying about anything else.
Below are some noteworthy features of RestAssured:
- RestAssured integrates easily with other testing frameworks, such as JUnit and TestNG, making it easier to include API tests along with other automated tests.
- It supports all HTTP methods, enabling you to perform complete testing of CRUD-based functionalities.
- It has out-of-the-box support for XML and JSON. This means you can use features like XMLPath and JSONPath to extract variables from responses.
- There is a robust set of assertion methods to validate the responses.
You can read more about RestAssured [here](https://rest-assured.io/).
### Challenges using RestAssured in Kubernetes
RestAssured is a powerful tool for testing APIs in CI pipelines and local environments. However, running these tests in a Kubernetes environment presents its own challenges.
- Setting up environments for tests with required dependencies and configurations can be complex.
- Managing and scaling tests based on the load in the Kubernetes cluster can be challenging.
- Avoiding interference, maintaining consistency, and running tests in isolation is problematic.
- Integrating with CI/CD tools and maintaining test artifacts and logs can be a painful process.
Let us see how Testkube helps overcome these challenges.
## Using RestAssured With Testkube
Testkube is a Kubernetes-native testing framework that allows you to create testing workflows in a declarative, version-controlled way. It integrates with most of the CI/CD tools and helps you automate the initiation, execution, and validation of tests as part of automated workflows.
Testkube allows you to plug in any testing tool and leverage the power of Kubernetes. It converts your tests, test suites, and other artifacts into Kubernetes CRDs, allowing you to manage them declaratively.
With Testkube, you can create Test Workflows that include everything from provisioning necessary infrastructure components to integrating seamlessly with other testing tools and orchestrating complex tests. Refer to our [Test Workflows documentation](https://docs.testkube.io/concepts/test-workflows/) to learn more.
Let's examine how to use RestAssured with Testkube for API testing. We'll create a Test Workflow using Gradle and integrate RestAssured tests into it. This [repo](https://github.com/kubeshop/testkube-examples/tree/main/RestAssured%20Test%20Using%20Gradle) contains all the required files for this example.
### Pre-requisites
- Get a [Testkube account](https://testkube.io/get-started).
- Kubernetes cluster - we're using a local Minikube cluster.
- [Testkube Agent](http://docs.testkube.io/testkube-cloud/articles/installing-agent) configured on the cluster.
Once the prerequisites are in place, you should have a target Kubernetes cluster ready with a Testkube agent configured.
### Creating a Test Workflow
Navigate to the Test Workflows tab and click on "Add a new test workflow".
This will provide you with three options:
- Create from scratch - _use the wizard to create a Test Workflow._
- Start from an example - _use existing k6, cypress, and playwright examples_
- Import from yaml - _import your own Test Workflow._
We'll choose the "create from scratch" option to create this workflow.
- Provide a name for the workflow and choose the type as Gradle.
- Provide the run command. In this case, we'll provide `./gradlew test`
- Provide a Gradle version, we'll use `8.5.0-jdk11`

On the next screen, provide the source for the test file. This can be a Git repo, a string, or a file. In this case, we'll use a Git repo.

On the next screen, it will generate the yaml spec file and display the output.

```yaml
kind: TestWorkflow
apiVersion: testworkflows.testkube.io/v1
metadata:
name: gradle-restassured
namespace: testkube
labels:
test-workflow-templates: "yes"
spec:
use:
- name: official--gradle--beta
config:
run: ./gradlew test
version: 8.5.0-jdk11
content:
git:
uri: https://github.com/kubeshop/testkube-examples.git
revision: main
paths:
- RestAssured Test Using Gradle
container:
workingDir: /data/repo/RestAssured Test Using Gradle
steps:
- artifacts:
workingDir: /data/repo/RestAssured Test Using Gradle/app/build/
paths:
- '**/*'
```
The YAML file is self-explanatory, as it lists the details you provided in the wizard. We have added extra parameters for artifacts that will collect and store the reports generated by RestAssured.
Below is the RestAssured test file that explains what we are testing.
```java
package org.example;
import io.restassured.RestAssured;
import io.restassured.response.Response;
import org.junit.jupiter.api.Test;
import static io.restassured.RestAssured.*;
import static org.hamcrest.Matchers.*;
public class ApiTest {
@Test
public void testGetEndpoint() {
RestAssured.baseURI = "https://jsonplaceholder.typicode.com";
given().
when().
get("/posts/1").
then().
statusCode(200).
body("userId", equalTo(1)).
body("id", equalTo(1)).
body("title", not(empty())).
body("body", not(empty()));
}
}
```
The test sends a GET request to the https://jsonplaceholder.typicode.com endpoint and validates the status code and the `userId`, `id`, `title`, and `body` fields of the response.
The repo contains other files, including test steps and a test runner, which contain related code for executing the RestAssured test using Gradle.
Click on "Create" to create the test workflow.
### Executing the Test Workflow
Once the workflow is ready, you'll see the newly created test workflow on the screen. Click on it and click "Run Now" to start the workflow.
You'll see the workflow executing along with the real-time logs of every step.

You'll see the test result based on the test execution. In this case, you'll see the test pass.

Since we have configured the artifacts for this, you can navigate to the artifacts tab and look at the reports generated by RestAssured. Testkube saves these reports for every execution, making it easier to analyze the tests.


This was a simple demo of creating a RestAssured Test Workflow using Gradle for Kubernetes testing. To get more out of Test Workflows, you can create custom workflows and import them into Testkube.
## Summary
In this post, we looked at how the API-first design approach is gaining popularity. Many Java applications are adopting this design principle, and tools like RestAssured are gaining traction for testing APIs. We also looked at RestAssured and the complexities of running it on Kubernetes.
We then saw how, using RestAssured with Testkube, you can create an end-to-end workflow for API testing that leverages the power of Kubernetes.
Visit the [Testkube website](https://testkube.io) to learn more about the other testing tools you can integrate with. If you struggle with anything, feel free to post a note in our active [Slack community](https://testkube.io/slack). | michael20003 |
1,915,874 | Transforming Workloads Seamlessly with Raven: The Automated Workload Conversion Tool | In today's rapidly evolving technological landscape, businesses are constantly seeking efficient ways... | 0 | 2024-07-08T14:10:32 | https://dev.to/onixcloud/transforming-workloads-seamlessly-with-raven-the-automated-workload-conversion-tool-3ena | data, automation, cloud, migration | In today's rapidly evolving technological landscape, businesses are constantly seeking efficient ways to optimize their operations. One significant challenge many enterprises face is the seamless migration of data and workloads across different platforms and systems. This is where [Raven Migration](https://www.onixnet.com/raven/), the automated workload conversion and translation tool, steps in as a game-changer.
**Raven: The Data Transformation Tool**
Raven is designed to simplify the complex process of migrating and transforming workloads between various environments. Whether moving data to the cloud, transitioning between databases, or upgrading legacy systems, Raven offers a robust solution. Its automated capabilities streamline what would otherwise be a time-consuming and error-prone task, ensuring minimal disruption to business continuity.
**Automated Workload Conversion Made Easy**
One of Raven's standout features is its automated workload conversion capability. This tool automates many of the manual processes traditionally involved in data migration and transformation. By leveraging advanced algorithms and machine learning, Raven can efficiently translate schemas, refactor code, and optimize performance settings. This not only accelerates the migration process but also reduces the risk of human error, ensuring data integrity and system reliability throughout the transition.
**Benefits of Using Raven**
For businesses, the benefits of adopting Raven are manifold. Firstly, it significantly reduces the resources and time required for migration projects. By automating repetitive tasks, Raven frees up IT teams to focus on more strategic initiatives that drive innovation and growth. Moreover, its ability to handle complex data structures and diverse technologies makes it versatile across various industry sectors.
**Why Choose Raven Over Traditional Methods?**
Unlike traditional migration methods that rely heavily on manual intervention and scripting, Raven offers a scalable and repeatable solution. Its intuitive interface allows users to set parameters and dependencies, ensuring a tailored migration process that meets specific organizational needs. This level of customization not only enhances efficiency but also minimizes the learning curve for IT professionals tasked with executing migrations.
**Future-proofing Your Data Strategy**
As businesses continue to embrace digital transformation, the ability to adapt and scale efficiently becomes paramount. Raven not only facilitates seamless migration today but also future-proofs data strategies for tomorrow. Its agile framework and compatibility with emerging technologies ensure that enterprises can stay ahead in a rapidly changing landscape without the burden of legacy constraints.
**Conclusion**
In conclusion, Raven represents a significant advancement in the realm of automated workload conversion tools. By simplifying data migration, optimizing performance, and ensuring data integrity, Raven empowers businesses to embrace change with confidence. Whether migrating to the cloud or upgrading systems, Raven is poised to be the trusted companion for enterprises navigating the complexities of modern IT environments. Embrace the future of data transformation with Raven and unlock new possibilities for your business today.
| onixcloud |
1,915,875 | Swift Beginnings: A New Language Journey | Origin My journey with Swift began around two weeks ago. While many of my peers are immersing... | 0 | 2024-07-08T14:11:09 | https://dev.to/aaron_castillo3290/swift-beginnings-a-new-language-journey-26b1 | **Origin**
My journey with Swift began around two weeks ago. While many of my peers are immersing themselves in Python and Java to develop complex databases, I’ve chosen to focus on Swift to create an engaging iPhone game. I chose this path because I wanted to build something that resonates with a broad audience, not just other programmers. My goal is to craft a game that people find enjoyable and would want to revisit during their free time.
However, I’ve encountered a challenge: Swift isn’t as widely used as some other languages. According to the TIOBE Index, Swift ranks 12th in popularity among programming languages in 2024. As a result, I find myself relying more heavily on documentation and resources than I might with more popular languages. Despite this, I’m excited about the potential to create something both fun and accessible.
**What is Swift?**
Swift is an open-source programming language developed by Apple for building iOS, macOS, watchOS, tvOS, and beyond applications. It was first introduced at Apple's Worldwide Developers Conference (WWDC) in 2014.
The programming language I’m most comfortable with is JavaScript. My experience with JavaScript has provided me with a solid understanding of programming fundamentals, which has proven to be very beneficial as I begin learning Swift. Given this strong foundation, I expected Swift to be relatively straightforward, and I was right!
However, I’ve discovered that while Swift shares many core programming concepts with JavaScript, there are some notable nuances and differences worth mentioning. For instance, Swift’s syntax and type system differ from JavaScript's. Swift is a statically typed language, meaning that types are checked at compile time, which contrasts with JavaScript's dynamic typing. This shift requires a different approach to handling variables and functions, and understanding these nuances has been an important part of my learning process.
Also, Swift's emphasis on safety and performance introduces new concepts and practices, such as optionals, value types, and memory management, which are less prominent in JavaScript. These features are designed to enhance code reliability and efficiency, but they also involve a shift in mindset compared to what I’m used to.
Overall, while transitioning from JavaScript to Swift involves navigating some unique challenges, my background in programming has made the process manageable and even enjoyable. I look forward to continuing to explore and master Swift, appreciating how its distinct features contribute to building robust and efficient applications.
In JS, a constant is declared with `const` and a regular variable with `let`.
In Swift, a constant is declared with `let` and a regular variable with `var`.
In JS, there is no compile-time type safety unless you're using TypeScript.
In Swift, type safety is built in (explicit type annotations are optional thanks to type inference).
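A few lines of Swift make the comparison concrete (the names here are invented for illustration):

```swift
let maxLives = 3   // `let` declares a constant (like `const` in JS); reassigning it won't compile
var score = 0      // `var` declares a mutable variable (like `let` in JS)
score += 10

// Type safety is built in; explicit annotations are optional because of type inference.
let title: String = "Swift Beginnings"
let inferred = 42  // inferred as Int
print(score, title, inferred)
```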
**Looking Forward**
As I continue learning Swift, I’m planning to dive into SpriteKit, a framework designed for 2D game development. SpriteKit simplifies creating games by providing built-in tools for handling sprites, animations, and physics. With SpriteKit, I can easily add and animate game elements, simulate realistic interactions like collisions and gravity, and create effects such as explosions and smoke.
The framework also integrates well with Apple’s technologies, including Metal for advanced graphics rendering. By exploring SpriteKit, I aim to develop engaging games that leverage Swift’s capabilities and take advantage of the powerful features provided by the Apple ecosystem.
**Tips For Picking Up a New Coding Language**
For those embarking on the journey of learning Swift or any new programming language, leveraging your existing programming knowledge can significantly ease the transition.
- If you’re coming from a language like JavaScript, focus on understanding the unique aspects of the new language, such as Swift’s static typing and safety features.
- Take advantage of the built-in tools and frameworks offered by the language, like SpriteKit for game development in Swift, to build practical projects that solidify your understanding.
- Experiment with features specific to the new language, and integrate with related technologies, such as Metal for graphics in Swift, to gain deeper insights and hands-on experience.
- Embrace the learning curve with patience and curiosity, and remember that each language has its own set of nuances that, once mastered, will enhance your overall programming proficiency.
Thanks for reading! Check back in for my next blog as I continue my journey learning Swift!
| aaron_castillo3290 | |
1,915,877 | User agent detection and the ua-parser-js license change | Written by Ikeh Akinyemi✏️ User agent detection plays an important role in helping developers... | 0 | 2024-07-11T14:02:14 | https://blog.logrocket.com/user-agent-detection-ua-parser-js-license-change | javascript, webdev | **Written by [Ikeh Akinyemi](https://blog.logrocket.com/author/ikehakinyemi/)✏️**
User agent detection plays an important role in helping developers optimize their websites and applications for various devices, browsers, and operating systems. By accurately identifying their users’ environments, developers can tailor their solutions to deliver the best user experience.
In this article, we’ll learn about user agent detection and explore the JavaScript library that has gained significant adoption among developers: [ua-parser-js](https://uaparser.dev/). ua-parser-js recently made headlines due to a change in its licensing model, and we’ll cover its switch from a permissive MIT license to a dual AGPLv3 + commercial license model, and how this affects individual and SaaS projects.
## What is user agent detection?
User agent detection is the process of identifying the specific software and hardware components your users are using to access your website or application. The detected information includes the user's browser name and version, operating system, device type, and more.
By leveraging user agent detection, developers can make informed decisions about how to present and optimize their users' content, ensuring accessibility, tailored experiences, cross-browser and hardware compatibility, and possibly enhanced performance across the wide range of platforms in use.
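As a toy illustration of what a parser has to do, here is a deliberately naive, hand-rolled extraction from a user agent string (the UA string and regex are illustrative; real-world UA strings vary wildly, which is exactly why dedicated libraries exist):

```javascript
// A sample desktop Chrome user agent string
const ua = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 ' +
           '(KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36';

// Naive extraction: look for a "Chrome/<version>" token.
// Real UA strings are full of traps (Edge and Opera also contain "Chrome/"),
// so production code should rely on a maintained parser instead.
function naiveBrowser(uaString) {
  const m = uaString.match(/Chrome\/([\d.]+)/);
  return m ? { name: 'Chrome', version: m[1] } : { name: 'unknown', version: null };
}

console.log(naiveBrowser(ua)); // { name: 'Chrome', version: '93.0.4577.82' }
```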
## The ua-parser-js library and its recent changes
[ua-parser-js](https://github.com/faisalman/ua-parser-js) is a lightweight JavaScript library that simplifies user agent detection. This library was developed and maintained by [Faisal Salman](https://github.com/faisalman), and it has gained strong adoption in the developer community due to its ease of use, extensive browser support, and reliable results.
With ua-parser-js, you can easily parse user agent strings and get precise information about the user’s browser, operating system, device, and more. The library provides a simple and intuitive API that can be easily integrated into your web projects.
In the following sections, we’ll learn about the ua-parser-js library, including its important features, installation methods, and usage examples. We’ll also discuss its recent licensing changes, which have sparked debates within the developer community.
## ua-parser-js installation and setup
The ua-parser-js library can be installed using various methods, depending on your development environment and preferences. With a lightweight footprint of approximately 18KB minified and 7.9KB gzipped, ua-parser-js can be easily integrated into both client-side (browser) and server-side (Node.js) environments.
To use ua-parser-js in an HTML file, you can simply include the library script in your HTML file:
```html
<!DOCTYPE html>
<html>
<head>
<script src="ua-parser.min.js"></script>
</head>
<body>
<script>
  var parser = new UAParser();
</script>
<!-- Your content goes here -->
</body>
</html>
```
[Download the minified JavaScript file](https://github.com/faisalman/ua-parser-js/blob/master/dist/ua-parser.min.js), and place it at the same directory level as the HTML file. If you're using ua-parser-js in a Node.js environment, you can install it using npm:
```bash
npm install ua-parser-js
```
Then, in your Node.js script, you can require the library:
```javascript
const UAParser = require('ua-parser-js');
```
For TypeScript projects, you can install the library along with its type definitions using npm:
```bash
npm install --save ua-parser-js @types/ua-parser-js
```
Then, in your `.ts` file, you can import the library:
```typescript
import { UAParser } from "ua-parser-js";
const parser = new UAParser();
```
### Usage and examples
The ua-parser-js library provides a simple API for parsing user agent strings and accessing the parsed data.
To parse a user agent string, you can create an instance of the `UAParser` object and call the `setUA` method with the user agent string:
```typescript
const parser = new UAParser();
parser.setUA('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36');
```
Once the user agent string is parsed, you can access the parsed data using the available methods provided by the `UAParser` object:
```typescript
const result = parser.getResult();
console.log(result.browser); // {name: "Chrome", version: "93.0.4577.82", major: "93"}
console.log(result.os); // {name: "Windows", version: "10"}
console.log(result.device); // {vendor: undefined, model: undefined, type: undefined}
```
The `getResult` method returns an object containing the parsed data, including information about the browser, operating system, device, CPU, and engine.
### Using extensions
ua-parser-js also allows you to extend its parsing capabilities by providing custom regular expressions and parsing rules. You can pass an array of extensions when creating a new instance of the `UAParser` object:
```typescript
const myExtensions = [
[/(myapp)\/([\w\.]+)/i, [UAParser.BROWSER.NAME, UAParser.BROWSER.VERSION]],
];
const parser = new UAParser(navigator.userAgent, myExtensions);
```
With these features and examples, you should have a good understanding of how to install, set up, and use ua-parser-js in your web development projects. In the next section, we'll explore the recent licensing changes surrounding ua-parser-js and their implications for developers and the open source community.
## The ua-parser-js license change
Recently, [ua-parser-js underwent a significant license change](https://github.com/faisalman/ua-parser-js/issues/680) that sparked discussions in the developer community. Before the change, ua-parser-js was initially distributed under the MIT license, which is known for its permissive nature. This license allowed developers to use, modify, and distribute the library with minimal restrictions, making it a popular choice for both open source and commercial projects.
ua-parser-js has grown in popularity, with over 2,240 dependent projects, and has been downloaded more than 12.3 million times. This growth has led to increased maintenance demands and a need for a more sustainable development model. The new licensing model aims to generate revenue to support ongoing maintenance and development efforts.
With the recent release of version 2.0, ua-parser-js adopted a dual licensing model: AGPLv3 (GNU Affero General Public License version 3) for the free and open source version and a proprietary, PRO license for commercial use. This change caused a significant shift in how developers can use and distribute ua-parser-js in their projects.
The dual licensing model tries to achieve a middle ground between maintaining an open source library and profiting from commercial users who might need extra functions or support. Currently, commercial projects are faced with a decision — they either abide by AGPLv3 license terms (which may require them to release their own source code) or buy a PRO license. The PRO license pricing starts from $12 for personal use and goes up to $500 for enterprise use. This model, often referred to as "open core," has been adopted by other projects in the open source ecosystem, such as Sidekiq, Mastodon, Nextcloud, and others.
There's been talk of potential forks of the MIT-licensed version or the development of alternative libraries. For example, Node.js TSC member Matteo Collina has already created a fork called [my-ua-parser](https://github.com/mcollina/my-ua-parser) to maintain an MIT-licensed version.
As you navigate this transition, it's important for you to understand the changes and consider how they might impact your projects. In the next section, we'll explore some strategies for dealing with this license change in your own work.
## Navigating the license change as a developer
When deciding which license to use, you need to consider your project's nature and requirements, reassess its dependencies, and make informed decisions to avoid the challenges that license change presents.
If your project is already using a compatible open source license, then the AGPLv3 version might be suitable. This means you'll need to make your entire application's source code available if you distribute it or run it as a network service. However, keep in mind that using the AGPL version might limit the adoption of your project by others who can't comply with AGPL terms.
But if you're developing proprietary software or can't comply with AGPL terms, you should consider purchasing the PRO license; evaluate if the cost of the PRO license is justified by the benefits and features you need from ua-parser-js. Alternatively, you can continue using the v1.x branch or forks of ua-parser-js, which remains under the MIT license. But you should note that this version may receive limited updates in the future.
## Conclusion
For years, ua-parser-js has been appreciated as a valuable tool for web developers. Its ability to accurately parse user agent strings and provide detailed information about browsers, operating systems, and devices has made it an essential library for many of us.
The switch from the MIT license to a double AGPLv3 + PRO model undoubtedly caused a stir in the developer community. We witnessed a variety of responses to it; some [community members were understanding](https://github.com/faisalman/ua-parser-js/issues/680#issuecomment-2177944534) while others demonstrated [concern](https://github.com/faisalman/ua-parser-js/issues/680#issuecomment-1817421398) and [opposition](https://github.com/faisalman/ua-parser-js/issues/680#issuecomment-1819647275). To some, it would mean adjusting their projects to comply with the AGPLv3 license, while for others it might involve purchasing a PRO license or looking for alternative solutions.
As users of open source software, we need to be prepared for such changes and have strategies in place to adapt when necessary. | leemeganj |
1,915,878 | Why Are Leather Biker Jackets Cool? | The Timeless Appeal of Leather Biker Jackets Leather biker jackets have been a symbol of cool for... | 0 | 2024-07-08T14:16:02 | https://dev.to/monaljacketsuk/why-are-leather-biker-jackets-cool-24c0 | The Timeless Appeal of Leather Biker Jackets
Leather biker jackets have been a symbol of cool for decades. From their origins in the mid-20th century to their continued popularity today, these jackets have transcended fashion trends to become a staple in wardrobes worldwide. But what exactly makes leather [biker jackets](https://monaljackets.co.uk/biker-jackets) so undeniably cool?
A Rich History of Rebellion and Freedom
The leather biker jacket’s roots can be traced back to the 1920s when Irving Schott designed the first motorcycle jacket for Harley Davidson. Named the “Perfecto,” this jacket was created to protect motorcyclists while riding. Its association with freedom, rebellion, and the open road was cemented in the 1950s when Marlon Brando sported one in the iconic film "The Wild One." This connection to counterculture and the rebellious spirit of the 1950s and 1960s helped solidify the leather biker jacket's cool factor.
Enduring Style and Versatility
One of the primary reasons leather biker jackets remain cool is their timeless style. The classic design, characterized by asymmetrical zippers, wide lapels, and a snug fit, complements a variety of outfits. Whether paired with jeans and a t-shirt for a casual look or thrown over a dress for an edgy twist, a leather biker jacket adds an element of effortless cool to any ensemble.
Durability and Practicality
Leather biker jackets are not just about style; they are also incredibly practical. Made from high-quality leather, these jackets are designed to withstand the rigors of the road. Their durability ensures they can last for years, often looking better with age as the leather develops a unique patina. Additionally, the thick leather provides excellent protection against the elements and minor abrasions, making them a favorite among motorcyclists.
Celebrity Endorsement and Pop Culture Influence
Celebrities and pop culture have played a significant role in maintaining the cool status of leather biker jackets. Icons like James Dean, Elvis Presley, and later, musicians and actors across various genres, have embraced the leather jacket as a symbol of cool. More recently, celebrities like David Beckham, Rihanna, and Gigi Hadid have been spotted sporting leather biker jackets, reinforcing their status as a must-have fashion item.
A Statement of Individuality
Wearing a leather biker jacket often signifies a sense of individuality and personal style. It’s a garment that can be customized and worn in countless ways, allowing wearers to express their unique personalities. Whether adorned with pins, patches, or worn as-is, a leather biker jacket always makes a statement.
The Monal Jackets Difference
At Monal Jackets, we understand the enduring appeal of the leather biker jacket. Our collection features a variety of styles, from classic to contemporary, crafted from the finest materials to ensure both style and longevity. We pride ourselves on offering jackets that not only look cool but also stand the test of time. Whether you're a seasoned biker or simply a fashion enthusiast, Monal Jackets has the perfect leather biker jacket to suit your needs.
Conclusion
Leather biker jackets are cool for numerous reasons: their rich history, timeless style, durability, celebrity endorsement, and ability to express individuality. At [Monal Jackets](https://monaljackets.co.uk), we celebrate this iconic piece of fashion and strive to provide our customers with jackets that encapsulate everything that makes [leather biker](https://monaljackets.co.uk/biker-jackets) jackets so enduringly cool. Explore our collection today and find your perfect leather biker jacket.
| monaljacketsuk | |
1,915,879 | Become a Pro Programmer: 10 Tips and Strategies for Improving Your Coding Skills | As a programmer, it’s essential to continuously improve your skills to stay current and competitive... | 0 | 2024-07-08T14:28:37 | https://dev.to/akshayvs/become-a-pro-programmer-10-tips-and-strategies-for-improving-your-coding-skills-2364 | 
As a programmer, it’s essential to continuously improve your skills to stay current and competitive in the field. But with so many technologies, languages, and approaches, it can be overwhelming to know where to start. That’s why we’ve compiled a list of ten steps that you can take to improve your programming skills and achieve success.
From practicing regularly and working on projects to learning from others and staying up to date, these tips and strategies will help you maximize your potential and become a proficient programmer. So let’s get started on the path to programming mastery!
---
### 1. Set aside dedicated time to practice programming each week
Consistent practice is essential to improving your programming skills. You can become more familiar with different programming concepts and techniques by setting aside dedicated time to work on coding challenges or projects. This will help you build a strong foundation of knowledge and improve your problem-solving abilities.
### 2. Work on projects to apply your knowledge and solve real-world problems
Building projects is a great way to apply your knowledge and practice solving real-world problems. It can be helpful to start with smaller, more manageable projects and gradually work your way up to larger, more complex projects. As you work on projects, you will have the opportunity to learn new technologies, improve your coding skills, and gain valuable experience.
### 3. Join online communities or seek out mentors to learn from more experienced programmers
There is a wealth of knowledge and experience available within the programming community. Don’t be afraid to seek help or guidance from more experienced programmers. You can join online communities, ask for feedback on your code, or seek out mentors to learn from their successes and mistakes. Collaborating with others can also be a great way to learn and improve your skills.
Some popular forums include [Stack Overflow](https://stackoverflow.com/), [Reddit’s r/learnprogramming subreddit](https://www.reddit.com/r/learnprogramming), and the forums on sites like [Codecademy](http://www.codecademy.com/) and [Coursera](https://www.coursera.org/).
### 4. Follow blogs and online courses, or attend meetups and conferences to stay up to date with the latest trends and best practices in programming
Technology is constantly evolving, so it’s important to stay up to date with the latest trends and best practices in programming. This could involve following blogs or online courses, attending meetups or conferences, or simply staying informed about new technologies and programming languages. By staying current, you can ensure that you are using the most effective tools and techniques for your projects.
### 5. Try out new technologies and programming languages to expand your skillset and become more versatile
There is always more to learn in programming. Don’t be afraid to try out new technologies and programming languages. The more you learn, the more diverse and versatile you will become as a programmer. This will allow you to tackle a wider range of problems and projects, and it can also make you a more valuable asset to potential employers.
### 6. Write clean, well-documented code to improve your readability and maintainability
Writing clean, well-documented code is essential for improving the readability and maintainability of your code. By following best practices like using clear, descriptive variable and function names, adding comments to provide context and explanations, and following a consistent style guide, you can make your code easier to understand and work with.
Additionally, documenting your code with inline comments or external documentation tools can provide even more context and make it easier for others to understand and collaborate on your code. Overall, writing clean, well-documented code is an important skill that can help you write more efficient and effective code, and make it easier to work with others.
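As a small, hypothetical Python sketch of these practices (the functions and names are invented for illustration, not taken from any specific project), compare a terse implementation with a clean, documented one:

```python
def calc(d, r):
    # Unclear: what are d and r? What does the return value represent?
    return d - d * r


def apply_discount(price: float, discount_rate: float) -> float:
    """Return the price after applying a discount.

    Args:
        price: The original price, in the same currency unit as the result.
        discount_rate: The discount as a fraction, e.g. 0.2 for 20%.

    Raises:
        ValueError: If the discount rate is not between 0 and 1.
    """
    if not 0 <= discount_rate <= 1:
        raise ValueError("discount_rate must be between 0 and 1")
    # Descriptive names and a docstring make the intent obvious to readers.
    return price - price * discount_rate
```

Both functions compute the same value, but the second communicates its contract: readers (and tools) can see the expected inputs, units, and failure modes without reverse-engineering the body.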
### 7. Use version control to track changes to your code and collaborate with others
Version control systems like Git allow you to track changes to your code and collaborate with others. By using version control, you can more easily manage and organize your code, and you can also roll back changes if necessary.
### 8. Participate in coding challenges or hackathons to test your skills and learn from others
Coding challenges and hackathons are great ways to test your skills and learn from others. These events can provide a fun and challenging environment in which you can practice your coding skills and work on projects with other participants. They can also be a great opportunity to learn from others and to see how other programmers approach problems and solve challenges.
Coding challenges are typically short-term events that involve completing a set of coding tasks or problems within a certain timeframe. These challenges can be focused on a specific programming language or technology, or they can be more general. Coding challenges can be a great way to test your skills and learn from others in a low-pressure environment.
Hackathons are longer-term events that typically involve working on a project or building a prototype over a fixed period, often 24–48 hours. Hackathons can be a great way to work on projects with a team, learn new technologies, and gain experience in a fast-paced environment.
### 9. Work on open-source projects to gain experience and contribute to the community
Working on open-source projects is a great way to gain experience and improve your skills, as well as to contribute to the community. Contributing to open-source can be a great way to learn from others and gain experience working on real-world projects.
### 10. Keep learning and be willing to adapt to new technologies and approaches as they emerge
Last, but not least, it’s essential to continuously learn and adapt to new technologies and approaches to stay current and competitive in the field. This can involve following industry news and updates to stay informed about the latest trends and developments, taking online courses or attending workshops to learn new skills and technologies, joining online communities to connect with other programmers and learn from their experiences, and seeking out mentors who can provide guidance and support as you learn and grow.
By staying current and open to new approaches, you can ensure that you are using the most effective tools and techniques for your projects and continue to grow as a programmer. Additionally, being open to learning new technologies and approaches can make you more versatile and valuable as a programmer, and it can also help you tackle a wider range of problems and projects.
---
Improving your programming skills is a continuous process that requires dedication, practice, and a willingness to learn and adapt. By following the steps outlined in this guide, you can set yourself on the path to programming mastery and achieve success in your career.
Whether you’re just starting or you’re an experienced programmer looking to improve your skills, these tips and strategies can help you maximize your potential and become a proficient programmer.
So set aside dedicated time to practice, work on projects to apply your knowledge, seek out mentors and online communities, stay up to date with the latest trends and best practices, try out new technologies and languages, write clean and well-documented code, use version control, participate in coding challenges and hackathons, work on open-source projects, and keep learning and adapting to new technologies and approaches.
By following these steps, you can become a pro programmer and achieve success in your career. | akshayvs | |
1,915,881 | Deploy Angular App with GraphQL on IIS & Azure (Part 7) | TL;DR: This blog contains the step-by-step procedures for deploying the full-stack web app built with... | 0 | 2024-07-11T17:04:39 | https://www.syncfusion.com/blogs/post/deploy-graphql-angular-on-iss-azure-7 | angular, development, web, azure | ---
title: Deploy Angular App with GraphQL on IIS & Azure (Part 7)
published: true
date: 2024-07-08 11:16:14 UTC
tags: angular, development, web, azure
canonical_url: https://www.syncfusion.com/blogs/post/deploy-graphql-angular-on-iss-azure-7
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xf2okpemp57ftgxma1km.png
---
**TL;DR:** This blog contains the step-by-step procedures for deploying the full-stack web app built with Angular and GraphQL on both IIS and Azure App Service. It covers preparing the application and enabling IIS on Windows, configuring Azure resources, and troubleshooting common hosting issues.
Welcome to our exciting journey of building a full-stack web application using Angular and GraphQL. In the [previous article of this series](https://www.syncfusion.com/blogs/post/build-dynamic-watchlist-angular-graphql "Blog: Build a Dynamic Watchlist for Your Web App with Angular & GraphQL (Part 6)"), we added the watchlist feature to our application, allowing users to add or remove movies from their watchlist. In this article, we will learn to deploy our MovieApp on both IIS and Azure App Service.
Let’s get started!
## Prepare the app for deployment
To prepare our app for deployment, we must ensure it builds successfully. The Angular app has a budget for the bundle size in the **angular.json** file.
To run the application, navigate to the **clientApp** folder and use the following command.
```
ng build
```
You will see a bundle size exceeded error, as shown in the following screenshot.[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Bundle-size-limit-reached-See-error.png)
This error indicates that the current bundle size is 4.94 MB, exceeding the allotted 1 MB budget. To resolve this issue, update the bundle budget to 5 MB in the **angular.json** file.
```js
"production": {
"budgets": [
{
"type": "initial",
"maximumWarning": "500kb",
"maximumError": "5mb" // Update this line.
},
// Existing code.
],
},
```
Next, update the GraphQL server endpoint URL in the **src\app\graphql.module.ts** file. Refer to the following code to update the URL.
```js
const uri = 'https://localhost:7214/graphql'; // Current code.
const uri = 'graphql'; // Update to this.
```
We’re hosting the GraphQL server and the Angular client app on the same server. As a result, the base URL for the server and client will be identical. This modification ensures that our client can connect with the server using the **graphql** endpoint name on the same base URL.
However, if your GraphQL server and Angular client app are hosted on different servers, you’ll need to provide the complete GraphQL server URL.
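To see why a relative endpoint works when the client and server share a host, consider how the endpoint resolves against the site’s base URL. This hypothetical sketch uses Python’s `urllib.parse.urljoin` purely to illustrate the resolution rule; in the real app, the browser and Apollo’s HTTP link perform this resolution:

```python
from urllib.parse import urljoin

# When the Angular app is served from the same host as the GraphQL server,
# a relative endpoint resolves against that host automatically.
base_url = "https://movieapp.com/"   # hypothetical deployed site
print(urljoin(base_url, "graphql"))  # https://movieapp.com/graphql

# If the server lives elsewhere, the client must use the full URL instead;
# an absolute URL is left untouched by resolution.
print(urljoin(base_url, "https://api.example.com/graphql"))
# https://api.example.com/graphql
```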
## Enable IIS on a Windows machine
To enable Internet Information Services (IIS) on Windows 11, follow these steps:
1. Click on the search icon in the Windows 11 taskbar.
2. Type **features** and select **Turn Windows features on or off**. This will open a window where you can enable various built-in options, including IIS.
3. Expand the IIS option by clicking on the plus icon. Under it, you will find a list of suboptions.
4. Select all the options under Web Management Tools & World Wide Web Services. This will not only install IIS but also the IIS Management Console, along with application features, common HTTP features, health diagnosis, performance, and security features.
5. After selecting these options, click **OK**. Refer to the following image.
[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Enable-IIS-on-Windows-11-Search-Features-install-Internet-Information-Services..png)
## Install URL rewrite module
The URL rewrite module allows site administrators to create custom rules for implementing user-friendly and SEO-enhanced URLs. To get started, check out the detailed instructions in the [Microsoft documentation](https://learn.microsoft.com/en-us/iis/extensions/url-rewrite-module/using-the-url-rewrite-module "Using the URL Rewrite Module").
Follow these steps to install it:
1. Navigate to the [URL Rewrite download page](https://www.iis.net/downloads/microsoft/url-rewrite "URL Rewrite download page").
2. Scroll down to the **Download URL Rewrite Module 2.1** section.
3. Download the installer that matches your preferred language and machine configuration. Refer to the following image.
[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Upgrade-your-URLs-Install-URL-Rewrite-for-SEO-friendly-links..png)
## Install the .NET Core hosting bundle
To power your app with the .NET Runtime and IIS support, you’ll need the .NET Core Hosting Bundle. First, navigate to the [.NET downloads page](https://dotnet.microsoft.com/en-us/download/dotnet "Download .NET") and click on the latest stable version of .NET. For our app, we are using **.NET 8.0**, so select that option.
Refer to the following image.[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/NET-Core-Bundle-for-IIS-apps-select-.NET-version.png)
On the download page, you’ll find a table under the Download .NET (Current) heading listing .NET versions. Focus on the Run apps – Runtime column to find the row for the specific .NET Core runtime version you need.
In that row, under the **Windows** section, locate the **Hosting Bundle** link. This link will take you to the download page for the .NET Core Hosting Bundle for Windows.
Refer to the following image.[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/NET-Core-Windows-Hosting-Bundle-in-Run-apps-table..png)
**Note:**
**1.** Install the .NET Core hosting bundle after installing IIS.
**2.** Restart your machine after installing the .NET Core Hosting Bundle.
## Publish the MovieApp app to IIS
After setting up IIS and installing the necessary dependencies, it’s time to publish the **MovieApp** project:
1. In your solution explorer, right-click on the **MovieApp** project.
2. From the context menu, choose **Publish**.
3. In the Publish window, click on the **Folder** link, then click **Next** to proceed.
4. In the next window, specify the path of the folder where you want to publish your app. Click **Finish** to confirm your selection.
5. Finally, click **Publish** at the top of the window to start the building and publishing process. Refer to the following image.
[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Publish-MovieApp-Right-click-then-click-Publish-then-Folder.gif)
If there are no build errors, the app will be successfully published to the specified folder path.
## Configuring IIS
To start with IIS, open the Windows 11 search box, type **IIS**, and select **Internet Information Services (IIS)** to launch the web server manager.
To create a website:
1. Open IIS Manager. Right-click on **Sites** in the left pane.
2. Select **Add Website**.
3. Fill in the details:
- **Site name:** Enter a name for your site. For this demo, we’ll use **MovieApp**.
- **Physical path:** Provide the folder path where you’ve published the MovieApp project.
 - **Hostname:** Specify the URL to access the app. For this demo, we’ll use **movieapp.com**.
4. Click **OK** to create the website.
Next, you’ll need to configure the application pool:
1. Application pools are automatically created for each website using the site name you provided. Locate the pool corresponding to your website.
2. Double-click on the pool. In the edit Application Pools window, select **No Managed Code** from the .NET CLR version dropdown.
3. Click **OK** to save the settings. Refer to the following image.
[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Configure-App-Pool-No-Managed-Code-.NET-Core.-See-image..gif)
## Configure the DNS host
To configure the host file on your machine in Windows 11, follow these steps:
1. Before editing the host file, create a backup. This ensures that you can restore it if anything goes wrong.
2. Launch File Explorer and navigate to **C:\Windows\System32\drivers\etc**.
3. Copy the host file to another location for safekeeping.
4. Open Notepad as an administrator.
5. In Notepad, go to the **File** menu and choose **Open** (or use the shortcut **Ctrl+O** ).
6. Paste the host file path ( **C:\Windows\System32\drivers\etc\hosts** ) into the File name field in the Open dialog box. Press **Enter** to open the hosts’ file.
7. Once the host file is open in Notepad, scroll to the end of the file and add a new line.
```
127.0.0.1 movieapp.com
```
8. After adding this line, save the file. Refer to the following image.
[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Edit-Windows-11-host-file-backup-first..png)
## Execution demo
After executing the previous code examples, open a browser and navigate to the URL [http://movieapp.com/](http://movieapp.com/ "MovieApp Site"). You will see the output as shown in the following image.[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Run-code-visit-link.jpg)
## Troubleshooting common hosting issues with IIS
Let’s address the common issues related to hosting a .NET app.
### DNS not found error
If you encounter a DNS not found error when trying to open your website, try the following:
- Verify that the hostname (URL) is correctly configured in the host file located at **C:\Windows\System32\drivers\etc\hosts**.
- Ensure your machine is not connected to any VPN server, as it might interfere with DNS resolution.
- If you’re using a web proxy, temporarily disable it and try accessing the website again.
### HTTP error 500.19 – internal server error
This error occurs due to invalid configuration data. To resolve it:
- Ensure proper permissions for the publish folder (where your app resides).
- Grant read permission to the IIS\_IUSRS group in the publish folder to access the **Web.config** file and other necessary files.
### 500 internal server error with data not populated
Your website loads, but data isn’t populated, and you encounter a 500 internal server error. To resolve this:
- Ensure that your database connection string is in the correct format.
- The user ID specified in the connection string should have both the **db\_datareader** and **db\_datawriter** roles.
- If the issue persists, consider providing the user with **db\_owner** permission to troubleshoot.
**Note:** If you republish the app, remember to refresh your website and the application pool in IIS.
With these issues addressed, we’ve successfully deployed the app to IIS. Now, let’s see how to deploy it to Azure App Service.
## Create a SQL Server resource on the Azure portal
Before proceeding, ensure you have an [Azure subscription](https://azure.microsoft.com/en-in/ "Azure subscription") account.
### Step 1: Create SQL Server on Azure
We will create an SQL Server on Azure to handle our database operations.
Follow these steps:
1. Open the [Microsoft Azure portal](https://portal.azure.com/#home "Microsoft Azure").
2. Click **Create a resource**.
3. Locate the SQL Database resource and click **Create**. Refer to the following image.
[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Azure-SQL-setup-Portal-then-click-on-Create-a-resource-locate-the-SQL-Database-and-create.png)
### Step 2: Fill in SQL Database details
A new page with the title “ **Create SQL Database** ” will appear. On this page, you will be asked to furnish the details for your SQL database.
Provide the details in the basics tab as follows:
- **Subscription:** Choose your Azure subscription type from the dropdown menu.
- **Resource group:** You have two options:
- **Select an Existing Resource Group:** If you already have a resource group, pick it from the list.
- **Create a New Resource Group:** If you don’t have one, click **Create new** and provide a name for your new resource group (e.g., myResourceGroup).
- **Database name:** Assign a unique name to your database, ensuring that the name adheres to the following validation criteria.[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Name-your-database-unique-follows-validation-rules..png)
### Step 3: Server configuration
Click on **Create new** to set up a new server for your database. In the subsequent window, provide the following details.
- **Server name:** Choose a unique server name (e.g., **movieappdb** ). Note that server names must be globally unique across all Azure servers.
- **Location:** Select a location from the dropdown list.
- **Authentication method:** Select an authentication method according to your requirements. In this demo, we will use SQL authentication.
- **Server admin login:** Set a username (e.g., adminuser).
- **Password:** Create a password that meets the requirements and confirm it. Refer to the following image to understand the server configuration.
[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Azure-SQL-New-server-name-location-login..png)
### Step 4: Configure database performance and storage
1. Click on **Configure database**. In the next window, select the desired service tier option from the dropdown.
2. Click **Next** until you reach the **Review + create** tab.
3. Finally, click **Create** at the bottom of this tab to initiate the creation of your SQL database. Refer to the following image.
[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Configure-SQL-Database-tier-next-review-create..gif)
## Update the connection string in the application
With your database now created, the next step is to set up access so that you can connect to it from your local machine and other Azure resources, like the Azure App Service.
To allow the required access to the database:
1. Click **Set Server Firewall** at the top of the page.
2. Under the **Public access** tab, select **Selected networks**.
3. Scroll down to the **Exceptions** section. Check **Allow Azure services and resources to access this server** checkbox.
4. Click **Save** at the bottom.
To connect to this database, you will need a connection string. First, click on **Connection strings** in the **Settings** menu on the left side of the page. Then, select the **ADO.NET** tab and copy the provided connection string.
Refer to the following image.[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Connect-to-Azure-database-Firewall-connection-string..gif)
Next, open the **appsettings.json** file in your project and replace the existing local database connection string with the connection string of Azure SQL Server.
Refer to the following code.
```js
"ConnectionStrings": {
"DefaultConnection": "Server=tcp:moviedbserver.database.windows.net,1433;Initial Catalog=MovieDB;Persist Security Info=False;User ID={YourUserID};Password={YourPassword};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
},
```
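The connection string is a semicolon-separated list of `key=value` pairs. As a hypothetical illustration of that structure (not of how .NET parses it internally), a few lines of Python can split it into its parts:

```python
# Hypothetical connection string matching the format shown above.
connection_string = (
    "Server=tcp:moviedbserver.database.windows.net,1433;"
    "Initial Catalog=MovieDB;User ID={YourUserID};Password={YourPassword};"
    "Encrypt=True;Connection Timeout=30;"
)

# Split on ';' and then on the first '=' of each non-empty segment.
settings = dict(
    part.split("=", 1)
    for part in connection_string.split(";")
    if part  # skip the empty segment after the trailing ';'
)

print(settings["Server"])           # tcp:moviedbserver.database.windows.net,1433
print(settings["Initial Catalog"])  # MovieDB
```

Seeing the string as discrete settings makes it easier to spot a malformed pair (the "correct format" requirement from the troubleshooting section above) before deploying.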
## Creating database objects for Azure SQL Database
In Azure SQL Database, the tables needed for our application aren’t automatically created. Let’s walk through the process of creating them.
### Step 1: Connect to Azure SQL server
Launch the **SQL Server Management Studio** and connect to your Azure SQL Server by providing the following details:
- **Server name**: Use the server name from the connection string.
- **Authentication**: Choose SQL Server authentication.
- **Login**: Enter the server’s user ID.
- **Password**: Provide the password associated with the server.
- Click **Connect** to proceed.
### Step 2: Setting up firewall rules
When connecting from your local machine, you’ll be prompted to set up a firewall rule. Follow the steps mentioned below:
- Log in with your Azure account credentials.
- Select **Add my client IP address**.
- Click **OK** to confirm.
Refer to the following image.[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Azure-Firewall-Allow-local-machine.png)
### Step 3: Creating tables
Once connected, create all the necessary tables in the database. The SQL scripts for table creation are in the [Configure the database section of Part 1](https://www.syncfusion.com/blogs/post/full-stack-web-app-angular-graphql#configure-the-database "Blog: A Full-Stack Web App Using Angular and GraphQL: Part 1") of this series. Execute these scripts to generate the tables.
### Step 4: Running the application
Now, launch the application from Visual Studio. Initially, it will display no movie data, but you can view genre data. Refer to the following image.[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Run-app-no-movies-yet..png)
## Publish the MovieApp app to Azure
To publish your MovieApp project:
1. Right-click on the MovieApp project in your solution explorer.
2. Select **Publish** from the context menu. This action will open a Publish window.
3. In the Publish window, choose **Azure** and click **Next**.
4. You’ll be prompted to connect to your Azure account from Visual Studio. Once the login is successful, it will ask you to select the Azure service to host the application. Choose **Azure App Service** (Windows) and click **Next**.
5. The next window will display all the app services associated with your account. Since we want to create a new app service for this app, click **Create New**.
In the new window, provide the following details for your app service:
- **Name:** Choose a name that is universally unique and not used by anyone else.
- **Subscription name:** Select the appropriate subscription from your account configuration.
- **Resource group:** Choose a resource group or create a new one.
- **Hosting Plan:** You can either select an existing plan or create a new one. Refer to the following image.
[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Publish-MovieApp-to-Azure-App-Service-Windows..png)
Then, click **Create** to initiate the creation of your Azure App Service instance. Once the app service is created, select its name and click **Finish**.
On the next page, review all the provided information. Click **Publish** at the top to start building and publishing your app. After the publication is successful, the URL will automatically launch in your machine’s default browser.
The website URL will be <**app service name**>**.azurewebsites.net** , and the webpage looks like the following image.[](https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Deploy-app-to-Azure-Click-Publish-website-opens-in-your-browser..png)
Since the database tables are empty, no data is displayed on the page. To correct this, add an entry for the **Admin** user in the UserMaster database, as described in the [Update the database section of Part 5 of this series](https://www.syncfusion.com/blogs/post/full-stack-app-angular-graphql-5 "Blog: A Full-Stack Web App Using Angular and GraphQL: Adding Login and Authorization Functionalities (Part 5)").
Log in as the admin user to access the app and add some movie data to populate the database tables. Once you’ve added data, you can validate that the app works as expected.
## Execution demo
The application created in this series of articles is available at [MovieApp](https://movieapp-angular-graphql.azurewebsites.net/ "MovieApp demo on Azure").
## GitHub resource
For more details, refer to the complete source code for the [full-stack web app with Angular and GraphQL on GitHub](https://github.com/SyncfusionExamples/full-stack-web-app-using-angular-and-graphql "Full-Stack web app with Angular and GraphQL GitHub demo").
## Summary
Thank you for reading this article. In it, we learned to configure the IIS on a Windows machine and deploy a full-stack .NET app on it. We created an SQL Server database on Azure and configured it to be used as the database provider in our app. Finally, we deployed the app on the Azure app service.
Whether you’re already a valued Syncfusion user or new to our platform, we extend an invitation to explore our [Angular components](https://www.syncfusion.com/angular-components "Syncfusion Angular components") with the convenience of a [free trial](https://www.syncfusion.com/downloads/angular "Get the free evaluation of the Essential Studio products"). This trial allows you to experience the full potential of our components to enhance your app’s user interface and functionality.
Our dedicated support system is readily available if you need guidance or have any inquiries. Contact us through our [support forum](https://www.syncfusion.com/forums "Syncfusion Support Forum"), [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"), or [feedback portal](https://www.syncfusion.com/feedback/angular "Syncfusion Feedback Portal"). Your success is our priority, and we’re always delighted to assist you on your development journey!
## Related blogs
- [A Full-Stack Web App Using Angular and GraphQL: Part 1](https://www.syncfusion.com/blogs/post/full-stack-web-app-angular-graphql "Blog: A Full-Stack Web App Using Angular and GraphQL: Part 1")
- [A Full-Stack Web App Using Angular and GraphQL: Data Fetching and Manipulation (Part 2)](https://www.syncfusion.com/blogs/post/full-stack-app-angular-graphql-2 "Blog: A Full-Stack Web App Using Angular and GraphQL: Data Fetching and Manipulation (Part 2)")
- [A Full-Stack Web App Using Angular and GraphQL: Perform Edit, Delete, and Advanced Filtering (Part 3)](https://www.syncfusion.com/blogs/post/full-stack-app-angular-graphql-3 "Blog: A Full-Stack Web App Using Angular and GraphQL: Perform Edit, Delete, and Advanced Filtering (Part 3)")
- [A Full-Stack Web App Using Angular and GraphQL: Adding User Registration Functionality (Part 4)](https://www.syncfusion.com/blogs/post/full-stack-app-angular-graphql-4 "Blog: A Full-Stack Web App Using Angular and GraphQL: Adding User Registration Functionality (Part 4)")
- [A Full-Stack Web App Using Angular and GraphQL: Adding Login and Authorization Functionalities (Part 5)](https://www.syncfusion.com/blogs/post/full-stack-app-angular-graphql-5 "Blog: A Full-Stack Web App Using Angular and GraphQL: Adding Login and Authorization Functionalities (Part 5)")
- [Build a Dynamic Watchlist for Your Web App with Angular & GraphQL (Part 6)](https://www.syncfusion.com/blogs/post/build-dynamic-watchlist-angular-graphql "Blog: Build a Dynamic Watchlist for Your Web App with Angular & GraphQL (Part 6)")
# Testing Kubernetes Applications with Pytest and Testkube: A Complete Guide

*Originally published at: https://testkube.io/learn/testing-kubernetes-applications-with-pytest-and-testkube-a-complete-guide*

Testing modern distributed applications within Kubernetes environments can be daunting due to the complexity and the need for scalable solutions. Traditional testing tools often fall short when it comes to efficiency and agility.
However, with the advent of Kubernetes native solutions like Testkube, it's easier than ever to integrate powerful testing frameworks such as Pytest into your testing workflows. In this comprehensive guide, we'll explore how to leverage Testkube with Pytest to streamline your testing processes in Kubernetes.
## Pytest - An Overview
Python remains a top choice for programming among developers due to its simplicity and robust ecosystem. Pytest, a popular framework within this ecosystem, excels at testing Python-based applications, though it is not limited to them. It is preferred for its minimalistic design, flexibility, and rich feature set, which make it suitable for testing almost any type of application. Its strengths include:
- **Minimal Design**: Pytest reduces boilerplate code, facilitating quick and easy test case creation.
- **Flexibility**: Its modular nature and extensive plugin ecosystem allow for significant customization.
- **Rich Feature Set**: Features like test auto-discovery, assertion introspection, and modular fixtures streamline test management.
- **Diverse Testing**: Pytest supports both unit and integration tests, ensuring you are covered in most testing scenarios.
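To make those features concrete, here is a minimal sketch of a fixture and parametrized test. The function and data are hypothetical, not part of the Testkube example:

```python
import pytest

def cart_total(cart):
    # Plain function under test: sums item quantities in a cart dict
    return sum(cart.values())

@pytest.fixture
def sample_cart():
    # A fixture: pytest injects this return value into any test
    # that declares a `sample_cart` argument
    return {"apple": 2, "banana": 3}

def test_total_with_fixture(sample_cart):
    assert cart_total(sample_cart) == 5

@pytest.mark.parametrize("cart,expected", [
    ({}, 0),
    ({"apple": 2}, 2),
    ({"apple": 2, "banana": 3}, 5),
])
def test_total_parametrized(cart, expected):
    # One test function expands into three separate test cases
    assert cart_total(cart) == expected
```

Save this as `test_cart.py` and running `pytest` picks it up automatically via auto-discovery; no runner boilerplate is needed.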
While Pytest is robust for testing your applications, integrating it into Kubernetes can pose challenges such as scaling and parallel execution. This is where Testkube comes into play.
## Why Use Testkube To Run Pytests in Kubernetes?
Testkube is specifically designed to orchestrate and scale Pytest executions within Kubernetes, taking full advantage of Kubernetes' scalability, flexibility, and efficiency. Here's why it stands out:
- **Kubernetes Native**: By storing tests as Kubernetes Custom Resource Definitions (CRDs), Testkube ensures compatibility and scalability.
- **Integration with CI/CD Tools**: Testkube seamlessly integrates with existing CI/CD pipelines, enhancing end-to-end testing capabilities.
- **Simplified Workflow Creation**: Without the need for complex scripting, Testkube facilitates the creation of detailed test workflows, allowing for better control and customization of test executions.
## Creating a Pytest Test Workflow
We've created a custom Pytest image for this example, but you can also create your own. For all the files and examples shown in this blog post, refer to this [Pytest folder](https://github.com/kubeshop/testkube-examples/tree/main/Pytest-Test-Workflow).
To demonstrate the power of Testkube with Pytest, let's create a simple test workflow. We first create a Pytest test that checks an API endpoint for the correct number of objects returned.
```python
import requests

def test_validate_object_count():
    # Send a GET request to the API endpoint
    response = requests.get("https://api.restful-api.dev/objects")

    # Assert that the response status code is 200 (OK)
    assert response.status_code == 200

    # Parse the JSON response
    data = response.json()

    # Validate the number of objects in the response
    assert len(data) == 13, f"Expected 13 objects, but received {len(data)}"
```
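One caveat with a test like this: it depends on the live API returning exactly 13 objects. A hedged variation, using only the standard library, injects the HTTP getter so the check can be exercised against a stub. The helper name and stub shape below are illustrative, not part of the Testkube example:

```python
from unittest.mock import Mock

def fetch_object_count(http_get, url="https://api.restful-api.dev/objects"):
    # http_get is injected (e.g. requests.get) so tests can pass a stub
    response = http_get(url, timeout=10)
    assert response.status_code == 200
    return len(response.json())

def test_object_count_with_stub():
    # Fake response mimicking the 13-object payload from the example above
    fake_response = Mock(status_code=200)
    fake_response.json.return_value = [{"id": i} for i in range(13)]
    stub_get = Mock(return_value=fake_response)

    assert fetch_object_count(stub_get) == 13
    stub_get.assert_called_once_with("https://api.restful-api.dev/objects", timeout=10)
```

In the Test Workflow itself you would still run the real request, but a stubbed variant like this lets the suite pass in CI even when the endpoint's data changes.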
Below are the steps to set up a Test Workflow in Testkube:
1. **Prepare Your Kubernetes Cluster**: Ensure your cluster has the Testkube agent installed and configured.
2. **Navigate to Test Workflows**: In the Testkube dashboard, click on "Add A New Test Workflow" and select "Create From Scratch".
3. **Workflow Configuration**: Follow the wizard to set up your workflow. Provide the test's image, the shell command (`pytest test-api-endpoint.py`), and the Git repository details.

On the next screen, you have to define the source of your test.
- Choose Git from the drop-down.
- Provide the path to the Git repository that has the test.
- Provide a path if the test file isn't in the root folder.
- Check the "Use path as working directory" checkbox.

The wizard's last page shows you the final yaml spec generated based on the values you provided.

Below is the yaml spec for the Pytest workflow:
```yaml
kind: TestWorkflow
apiVersion: testworkflows.testkube.io/v1
metadata:
name: pytest
namespace: testkube-agent
spec:
content:
git:
uri: https://github.com/kubeshop/testkube-examples.git
revision: main
paths:
- Pytest-Test-Workflow
container:
workingDir: /data/repo/Pytest-Test-Workflow
image: atulinfracloud/pytest-executor:latest
steps:
- name: Run test
shell: pytest test-api-endpoint.py
```
Click on "Create" to create and save the Test Workflow.
## Executing a Test Workflow
Click on "Run Now" to run the workflow. Clicking on the respective execution will show you the logs, artifacts, and the underlying workflow file.

Creating a Test Workflow in Testkube is straightforward. As we saw, a single yaml file manages everything related to our test: code, environments, resources, and artifacts. This makes your testing process and workflows more efficient and robust.
## Summary
Pytest is one of Python's most popular testing frameworks, and Testkube is a Kubernetes-native test orchestration framework. Leveraging these tools together streamlines your testing process for Kubernetes applications.
As we saw in this post, developers can benefit from Testkube's Kubernetes capabilities and Pytest's flexibility in creating efficient Test Workflows. You can also bring in any other testing tool and create a Test Workflow, not just Pytest.
If you already have a testing tool and want to experience Test Workflows, visit our [website](https://testkube.io) to learn more about Testkube's capabilities and how it can transform your testing workflow. You can also join our [Slack channel](https://testkube.io/slack) for any assistance.
---
title: Writing as You Go - Technical Article Writing Made Easy
published: true
date: 2024-07-08 06:06:00 UTC
tags: writing
canonical_url: https://www.stephanmiller.com/writing-as-you-go/
---
If you are a working developer like I am, you should never have a problem coming up with an article idea.
Well, I am one and often do. Sometimes I forget the things I write about in this article because they are rattling around in my head. So I panic when I can’t seem to come up with an idea for a technical article out of the blue. Do I really want to write the same [article about JavaScript reduce](https://dev.to/eristoddle/javascript-reduce-a-complete-guide-to-the-only-js-array-function-you-really-need-334a) that is available on seemingly 100 other websites?
So with this article, I am trying to get them down on paper and be more systematic in the future.
You might not have a market for the article ideas you come up with, but if you use the methods here to come up with them, there will be an audience waiting for them. And if you want to write the 512th iteration of a [JavaScript reduce article](https://dev.to/eristoddle/javascript-reduce-a-complete-guide-to-the-only-js-array-function-you-really-need-334a), I can show you how to make it better than the others.
Disclaimer: This is a work in progress. I only recently became less of a dumbass and started learning some of these lessons. [Writing on a deadline with editors and standards](https://dev.to/eristoddle/how-i-made-4000-in-a-single-month-freelance-writing-on-the-side-4mb8-temp-slug-9266261) helped with that.
## Table of Contents
- [How to Generate Technical Article Ideas](#how-to-generate-technical-article-ideas)
- [Write an Article When You Can’t Find Something](#write-an-article-when-you-cant-find-something)
- [Write an Article When You Do Something New](#write-an-article-when-you-do-something-new)
- [Write an Article to Store Your Own Notes](#write-an-article-to-store-your-own-notes)
- [Tools That Can Help With Writing Technical Articles](#tools-that-can-help-with-writing-technical-articles)
- [tabXpert](#tabxpert)
- [Github Gists](#github-gists)
- [Online IDEs](#online-ides)
- [DBeaver (for SQL)](#dbeaver-for-sql)
- [A Quick Screenshot Tool](#a-quick-screenshot-tool)
- [Joplin (or Evernote or Something Like It)](#joplin-or-evernote-or-something-like-it)
- [Google Search History](#google-search-history)
- [Editing Your Technical Articles Before You Publish](#editing-your-technical-articles-before-you-publish)
- [What Did You Miss in Your Article?](#what-did-you-miss-in-your-article)
- [What Are People Looking For?](#what-are-people-looking-for)
- [Does the Article Sound Right?](#does-the-article-sound-right)
- [Does it Play Well with Google?](#does-it-play-well-with-google)
- [My Technical Article Backup Plan](#my-technical-article-backup-plan)
- [Promoting Your Technical Article](#promoting-your-technical-article)
- [Conclusion](#conclusion)
## How to Generate Technical Article Ideas
### Write an Article When You Can’t Find Something
I know you’ve been there. You have a Google search open and you’re using Ctrl click to open the results in new tabs, because you can tell from the SERPs that the answers to your question are going in ten different directions. The majority of those open tabs will most likely be Stack Overflow threads, exponentially increasing the amount of things you have to try to fix your problem.
This is when you start taking notes. This is where you can help the next person that is having your problem find a much shorter route to a solution.
Or maybe you are working on something bigger, where you have to have one technology to work with another and you have quite a few resources open trying to figure it out. The point here is that all the pieces you need aren’t all on the same page.
Here are some examples of blog posts I wrote that were born this way:
- [How to Store JSON in PostgreSQL](https://dev.to/eristoddle/how-to-store-json-in-postgresql-2eb0): I discovered I could do this and wanted to learn how. I also discovered when I searched for any specific thing about it, I ended up on a new page. Not one single page broke down everything you could do simply and there were often 2-4 ways of doing the same things, which was confusing. So, I broke it down for myself, writing an article while I did. I get 5-10 visitors a day to this article.
- [Creating an Android EULA Activity](https://dev.to/eristoddle/creating-an-android-eula-activity-15f4): When I wrote this I was new to both Android and mobile development. If you have been doing it for a while, this article would never have been needed. It is a pretty simple concept. But when I went to look for anything that would just show me a simple set of steps to do it, I couldn’t find it. So I figured it out and wrote about it. I also [reposted it to Medium](https://medium.com/@eristoddle/creating-an-android-eula-activity-52281913866). For years, the post on my blog and the Medium post were ranked #1 and #2 for “[creating an Android EULA activity](https://www.google.com/search?q=Creating+an+Android+EULA+Activity)” and many phrases close to it in Google. There is a canonical tag, so I am not sure how this happens, but I’m good with it.
- [JavaScript Reduce - A Complete Guide to the Only JS Array Function You Really Need](https://dev.to/eristoddle/javascript-reduce-a-complete-guide-to-the-only-js-array-function-you-really-need-334a): This one I wrote because I knew a lot of developers that didn’t use reduce at all because they didn’t understand it. I remembered being in that boat and I remembered when it finally clicked and knew I had been missing a lot not using this method. I knew I had a lot of competition in the search engines, so I spent some time listing everything I could remember running into dealing with JavaScript reduce and wrote a 3500 word article about it and it gets some traffic.
And if you try 50 things before the one that worked, you could also turn it into a story, mentioning every wrong step you took, because there will be others taking the same wrong steps and searching for why. The trail of keywords (AKA breadcrumbs) could lead them to your post. Here is where I did that:
- [A Tale of Saving a Windows Installation with Linux](https://www.stephanmiller.com/Saving-A-Dead-Windows-Installation-With-Linux/): This was a rant that I tried to turn into comedy and ended up seeding it with a lot of the keywords you would use if you had this problem for the first time, especially if you had already tried half a dozen things.
This type of thing sometimes happens when I am writing an article for a client and then you just have to roll with it, figure out how to fix the issue, and [mention it in the article](https://blog.logrocket.com/leveraging-react-server-components-redwoodjs/#:~:text=This%20is%20the%20step%20where%20I%20ran%20into%20this%20error).
### Write an Article When You Do Something New
Jargon is where understanding goes to die. It is a label you use in an industry to abstract away a concept you already know so you can get it out of the way to do other things. And it hides a lot of details that a newbie has no clue about.
When I am new to something, I would rather learn from the person who just did what I am trying to do rather than an expert who knows everything about it and can see 50 steps ahead of me. Why?
Because he already is 50 steps ahead of me and knows not to step on the sketchy-looking board in the bridge that will send me plunging to my death…but doesn’t tell me, because from where he stands, it’s common knowledge.
But it’s not for me.
And when it’s not, sometimes using the incorrect term in the right place beats using the correct term (jargon). And when you spent the last two hours on a “simple” step in an expert’s article because of an “oh, I thought everyone knew that,” putting that part in your article and explaining it as well as you can is worth it, even if the expert ridicules you for it. Because in the end, you will be helping newbies learn, while the expert is just confusing them.
Here are some articles I wrote when I realized that I could just turn what I was currently doing and learning into an article.
- [How I Finally Ditched Evernote for Joplin](https://dev.to/eristoddle/how-i-finally-ditched-evernote-for-joplin-2kjb): I had tried this once before but realized that I had Evernote everywhere: on the web, in my browser, on my phone, and on my tablet, which wasn’t the same OS as my phone. So I rolled back. When I tried again, I had more time, but it didn’t really take that long, so I wrote about it.
- [Creating Full Stack Dapps with Truffle Boxes](https://www.stephanmiller.com/creating-full-stack-dapps-with-truffle-boxes/): I have found that cryptocurrency confuses everyone, including developers. So you have money and you can write code that moves it around? What about bugs? Can hundreds or thousands of dollars disappear in a second because of a simple accident? So I started to figure it out and wrote about it.
- [How to Add Search to Your Static Site Generator](https://dev.to/eristoddle/how-to-add-search-to-your-static-site-generator-jekyll-hugo-gatsby-nikola-etc-23pd): I had tried this before with a Jekyll plugin and couldn’t get it to work, but wasn’t too focused on it. Fast forward a few years and I learned about [Lunr.js](https://lunrjs.com/), added search, and the results surprised me. It worked great, didn’t involve third party services, and didn’t take that long at all to implement. So I wrote about it.
- [Creating an Android EULA Activity](https://dev.to/eristoddle/creating-an-android-eula-activity-15f4): Yes, this article kind of fits in both categories. It was something I was doing for the first time and I didn’t find all the information in the same place.
- [A Tale of Saving a Windows Installation with Linux](https://www.stephanmiller.com/Saving-A-Dead-Windows-Installation-With-Linux/): Ditto for this one.
And when you take coding tests for interviews and have to do something new, that could be an article too. Plus, after you remove anything you need to from the code you wrote, you already have code examples for the article. I am planning an article on RTK Query because of the code I wrote for one coding test.
### Write an Article to Store Your Own Notes
I started a blog around 20 years ago because everyone else was and I didn’t want to miss out. But then I couldn’t think of anything to write about. So I started writing about things I did that I might have to do again. I also thought that other people might find the information useful.
So I wrote articles when:
- [My WordPress installation got hacked](https://www.stephanmiller.com/how-my-blog-got-hacked/)
- [I was messing with servers](https://www.stephanmiller.com/category/servers/)
- [My WordPress installation got hacked again](https://www.stephanmiller.com/sites-and-cpanel-hacked-with-prevedvsem123-cn-virus/)
- [When I found someone using images directly from our website](https://www.stephanmiller.com/hot-linked-image-marketing-ecommerce-optimization-series/)
- [My WordPress installation got hacked again](https://www.stephanmiller.com/getting-yourself-unhacked-by-mr-exe-destroyer-or-dark-master/)
- [I was trying things with affiliate marketing](https://www.stephanmiller.com/converting-the-clickbank-marketplace-xml-to-csv/)
- [When I was working in e-commerce](https://www.stephanmiller.com/attacking-and-defending-your-e-commerce-niche/)
- [And hacked again](https://www.stephanmiller.com/htaccess-hack-finder-script-for-vlag-nerto-ru/) (See a pattern? It’s why I use Jekyll now and haven’t had a problem since.)
- [When I was messing with Google search results](https://www.stephanmiller.com/the-jamaican-challenge/)
And eventually after I wrote enough articles about screwing with Google and other things SEO, I got some guest posting opportunities:
- [Stephan Miller on Search Engine People](https://www.searchenginepeople.com/blog/author/stephanmiller)
- [Stephan Miller on Search Engine Journal](https://www.searchenginejournal.com/author/stephan-miller/)
Shortly after that, I:
- [Was interviewed](https://www.searchenginepeople.com/blog/ruud-questions-stephan-miller.html)
- Was asked to speak at [a regional conference](https://www.stephanmiller.com/highlight-midwest-and-unconferences/)
- Got [a book deal](https://www.amazon.com/Piwik-Analytics-Essentials-Stephan-Miller/dp/1849518483) because of [an article I wrote](https://www.searchenginepeople.com/blog/piwik-web-analytics-for-wordpress-3-multisite-single.html)
So, just taking notes on what you do can go a long way.
## Tools That Can Help With Writing Technical Articles
Along the way, I’ve picked up a few tools that help me get this done.
### tabXpert
This [Chrome extension](https://tabxpert.com/) lets me save a whole Chrome session instead of a bunch of bookmarks. That way, if an article takes a little more research, I’ll just save it as a session and continue where I left off later.

### Github Gists
I have to admit I haven’t created any new [Github Gists](https://gist.github.com/eristoddle) in a while, but they can be a useful way to save snippets of code to use in an article. After you create a gist, you can copy the code to embed it in your article by clicking a button on the gist page:

I stopped using gists for this in the last few years because I just embed my code in markdown now and use CSS for the syntax highlighting.
### Online IDEs
A gist will give your users a nicely formatted, syntax-highlighted version of your code, but an online IDE will give your readers the ability to execute the code. I don’t use these all the time, but they can be especially useful for frontend code you want to show off. Here are some common ones:
- [JSFiddle](https://jsfiddle.net/): The classic JavaScript code playground.
- [CodePen](https://codepen.io/): More features for larger projects.
- [CodeSandbox](https://codesandbox.io/): At the time I first used it, it was the only one that would run React Native projects. It will also run some other languages like PHP, Python, Rust, and Go.
There are also online environments to execute code other than JavaScript like [ideone](https://ideone.com/) and [CodeChef](https://www.codechef.com/ide), but I haven’t used them yet for backend languages. There is even [SQL Fiddle](https://sqlfiddle.com/) for online SQL development environments.
### DBeaver (for SQL)
I had to mention this one because I ran into a feature in [DBeaver](https://dbeaver.io/) that made SQL-based articles 1000% easier to write if you use markdown. Just select and right click on any table layout in the app, choose `Advanced Copy` and you can copy what you selected as markdown. If you ever tried to create a table in markdown, you know how much time this can save.
So when you write a query and want to give an example of the results, you can paste them right into the article.

### A Quick Screenshot Tool
I do get that most operating systems have a way of creating a screenshot and maybe they have gotten better, but I just haven’t taken my chances lately. I used to use [Skitch](https://help.evernote.com/hc/en-us/articles/208431248-Evernote-Skitch-Quick-Start-Guide) along with Evernote, but it is no longer being updated and got really buggy on Macs with M processors. Also, I recently stopped using Evernote. So I use [Shottr](https://shottr.cc/) now. I can’t really say it’s the best, because it was the first one I tried after reviewing a few options. But I haven’t looked for a replacement yet, so it works for me.
The handy thing about Skitch in the past was that it would store screenshots in Evernote. It was also unhandy at the same time because once you set that up, it stores all your screenshots in Evernote. But this would allow me to start taking screenshots once I knew that I had an idea and come back later for them. Now I create them and paste them in a Joplin note for later use.
### Joplin (or Evernote or Something Like It)
Sometimes I don’t get started on the article right away and may do research here and there before I write it. A note taking app with a web clipper and mobile apps makes it easy to do research whenever I can. I don’t really have much of a process for this except clipping the whole page, maybe tagging it with something I’ll remember, and then remembering I clipped the notes when I’m writing the article.
### Google Search History
This is especially useful if you are writing an article about a topic because it took you many open tabs and Google searches to figure it out. Just go back through your search history to retrace the path you took. Here you will find keywords, wrong directions you took, etc. And if something fits, add it to your article. TabXpert also saves a history of your last 100 closed tabs per session, so that helps too.
An example is when I discovered the term “programmatic SEO”. I was doing it as early as 2004 and only discovered the term this year. I was looking for more techniques on doing what I used to do. I think I started with terms like “data site generation”, “generate SEO site with code”, etc. I ended up using dozens of terms before I ran into “programmatic SEO”. And I am guessing that other people would be in the same boat as I was at some point. But now that I know the jargon, I gave up on my search. But the phrases I used before I found it could be gold.
## Editing Your Technical Articles Before You Publish
My old method of writing a blog post was to come up with an idea, write it from start to finish in one sitting, hit publish, and be done. If I noticed a typo, I would fix it. Most of the time I would notice these after I published and I would have to edit it and republish a couple of times.
Now I do things differently, though some things do still slip through and sometimes I skip some of the steps.
### What Did You Miss in Your Article?
Now, you may have the best idea for a new article, but I guarantee that there will be others out there hitting the same topic from different angles. And you want to find them to see if you missed anything. You aren’t going to add everything you find and you aren’t even going to think of this in terms of beating the competition.
Instead, you want to make sure you covered everything about the topic, from your own angle, that you need to make it complete. I did this heavily on my [article on how to use reduce in JavaScript](https://dev.to/eristoddle/javascript-reduce-a-complete-guide-to-the-only-js-array-function-you-really-need-334a). Initially, it was just going to briefly explain how it works and then, for examples, show how to replace map, filter, and find with it.
But then I searched for articles that would be similar to mine, and from reading them, remembered when I had searched about JavaScript reduce before, I searched for specific things like:
- How to flatten an array of arrays
- How to use reduce with a JavaScript object
- How to convert CSV to JSON
- How to use reduce with async/await
And more, so the article grew to about three times its original size. Another option would be to break the article up into multiple articles and interlink them.
You can also check Stack Overflow and Reddit for popular questions related to what you are covering.
And, in the middle of writing this article, I realized that this and other articles I have written recently really need a table of contents, because people could land on the article from a variety of search terms and the table of contents would guide them to the section they are looking for. Well, let me tell you. That change on 4-5 articles on my site tripled traffic in a couple of months.
### What Are People Looking For?
This is where you stop thinking about how brilliant your idea is and start thinking about how it will help people and how they would find your article.
- Does the title describe the content of your article and make it stand out?
- Do you have headings that describe each section for the people who visit from Google and just want to read part of your article? While I will hit Ctrl-F and search the page when I am looking for something specific, not everyone will, and headings will help them find the section they are looking for.
- Does the excerpt and meta description describe the article well and make people want to read it?
These are all doorways that lead people into reading more of your article or clicking the link to it in the first place.
### Does the Article Sound Right?
Sometimes you read what you expect and not what you see, especially when you are reading your own words. Before you publish, you should read your article to make sure it makes sense and sounds right. But I have found that using a tool like [Natural Reader](https://www.naturalreaders.com/index.html) that will read it for me is much more effective, because I am not the one reading it. I just split my screen with my draft on one side and the reader on the other, so I can pause it when I need to change something.
Something I’ve started to do off and on is rewrite the introduction and conclusion by default. The introduction can be the difference between a visitor that continues to read your article and a bounce. And the conclusion might make them want to read another article you’ve written.
### Does it Play Well with Google?
This is the part where you tweak words and phrases you use in all parts of your article slightly. Here are some tools to help with that:
- [AnswerThePublic](https://answerthepublic.com/)
- [Google Ads Keyword Planner](https://ads.google.com/aw/keywordplanner/home)
- [Wordtracker](https://www.wordtracker.com/search)
- [Ubersuggest](https://app.neilpatel.com/en/ubersuggest/keyword_ideas)
- [SemRush](https://www.semrush.com/analytics/keywordoverview/?db=us)
- [WordStream](https://tools.wordstream.com/fkt)
- [Keyword Tool](https://keywordtool.io/)
And while I might sometime in the future, I haven’t had to pay for any of these yet to get what I need. After all, I am going to write the article based on my idea, not use the tools to come up with ideas for new articles. I am just checking whether some of the keywords I used could be replaced with other, more popular ones.
## My Technical Article Backup Plan
Most of the places I get paid to write for either have a list of topics they want to cover or a partially fleshed out idea that I can either accept or reject. Let’s just say that most of what I write for pay is not what I would choose to write, but I do it because of, well, I get paid. And I have been paid for over 1000 articles in the last four years or so.
I think because of AI, this has slowed down a bit. Though, get this, I did pick up a side gig editing technical articles written by AI to make sure the terminology used is the terminology that those in the field would use. And I have actually been paid more doing that than I would be writing the whole article from scratch for most places.
So when I am not writing for pay, I am writing articles like this. And I haven’t yet cold-pitched an article. I am kind of scared to. But I think what I am going to do is continue to use this technique of coming up with ideas when I have time, write the article, find a website where it might fit, and pitch it there. If no one wants it, I will publish it myself.
I take that back. I did pitch [How to Store JSON in PostgreSQL](https://dev.to/eristoddle/how-to-store-json-in-postgresql-2eb0) to Linode. I wrote it. I submitted it to their git repo with a pull request. And I waited a year and watched it sit there. So, I deleted my pull request and published it here. It’s kind of what gave me the idea for this backup plan.
It is also why I am scared to pitch another article. 95% of these places, in my experience, will just leave you hanging. I get that everyone is busy, but come on. Don’t list your site as accepting articles and then ignore everyone. And here is my shoutout to [CloudCannon](https://cloudcannon.com/) for being one that didn’t, even if they only told me they weren’t accepting any new articles at the time. Most have ended up on this list:

But what happened in the middle of writing this article, which I started writing in the beginning of March 2024?
- I got contacted by two companies on LinkedIn and now write for them.
- Writing picked up with the places I already wrote for.
- I did take two whole days off, Memorial Day and the Fourth of July, but I wrote the rest of the time.
But this still will be my backup plan.
## Promoting Your Technical Article
I have to tell the truth about this. I really have to work on this part. Up to now, here’s what I’ve been doing with recent articles I’ve written here:
- Publish it here
- Republish it on Medium
- If it is about coding, republish it on Dev.to
- Post about it on:
- Twitter
- LinkedIn
- Facebook
- Reddit
And that is about it. You know which brings the most traffic: Reddit. Sometimes Medium gets close but that traffic mainly stays there. I can’t say I’m an expert about Reddit, but I can say that it doesn’t matter if Redditors like you or hate you. You only have a problem when they don’t give a shit.
[My post about the JavaScript reduce article](https://www.reddit.com/r/javascript/comments/1943ex0/javascript_reduce_a_complete_guide_to_the_only_js/) on the r/javascript subreddit looks like it bombed and I kind of expected it to turn out this way. The title was kind of linkbaity and you had to read the article to see I wasn’t actually suggesting to use reduce any time you touch a JavaScript array. A lot of people I knew didn’t understand it and I thought suggesting it could be used for everything would make people look into it.
But…it got three times the page views of [my post about replacing Evernote with Joplin](https://www.reddit.com/r/joplinapp/comments/1adx9y1/how_i_finally_ditched_evernote_for_joplin/), which got 22 upvotes compared to 0. So yes, if you give developers on Reddit something to nitpick or a controversial development topic, you might actually get more traffic than if you write something not prone to attack.
The [post about Jekyll search](https://dev.to/eristoddle/how-to-add-search-to-your-static-site-generator-jekyll-hugo-gatsby-nikola-etc-23pd) had limited traction, but it is a pretty niche topic. I used to be scared to post articles on Reddit fearing they might get ripped apart. Now I think that might be the easiest route to traffic from Reddit because trying to write something everyone will love usually results in something so bland, no one cares.
Like I said, I need to work on promotion. Some things I am thinking of doing include:
- If I mention a library, tool, or software in a post, contact people involved with it about my article. For example, if I hadn’t already been writing [an article for LogRocket about RedwoodJS,](https://blog.logrocket.com/leveraging-react-server-components-redwoodjs/) I could have mentioned it on their community site, like [this](https://community.redwoodjs.com/t/building-a-meme-generator-with-redwoodjs/1568). And I could have done this with the Joplin, Truffle, and Jekyll articles.
- Find more places like Dev.to and Medium where my articles can be republished or promoted like:
- [Hackernoon](https://hackernoon.com/)
- [Hashnode](https://hashnode.com/)
- [Substack](https://substack.com/)
- [Hacker News](https://news.ycombinator.com/) (I have actually tried to submit a few articles here before and got errors when I did)
- [DZone](https://dzone.com/)
- Find places where people are having issues that my articles address, solve their problem and link to my article for more details.
## Conclusion
So this article took three months to write, because I got busy. It was meant to help me systemize writing technical articles, so when I saw a gap in paid writing assignments, I could jump right into another article. But no gaps came in all that time, so I guess that’s a good thing?
What I did do is test my theory. At the end of each work day, I thought, “What happened or what did I work on today that I could write about?” and there was always something and most of the time, half a dozen things. I never got to those ideas, but that’s a good problem to have.
I also haven’t tested my new plan for pitching technical articles or trying new promotion techniques, but once I get the time to write new articles in the first place, I’ll let you know how it goes. | eristoddle |
1,915,885 | 🌟 Embracing Cloud-Native Applications in the IT Industry ☁️🚀 | As the IT industry evolves, so does the approach to application development. Cloud-native... | 0 | 2024-07-08T14:22:20 | https://dev.to/m_hussain/embracing-cloud-native-applications-in-the-it-industry-2mpi | cloudnative, devops, webdev, programming | As the IT industry evolves, so does the approach to application development. Cloud-native applications are paving the way for agility, scalability, and rapid innovation. Here’s why they're crucial:
Agility: Deploy updates faster and more frequently, responding quickly to market demands and customer feedback.
Scalability: Scale resources automatically to handle varying workloads, ensuring consistent performance without downtime.
Resilience: Built-in redundancy and failover mechanisms enhance reliability, keeping applications available even during disruptions.
Cost Efficiency: Optimize resource utilization and reduce operational costs by leveraging cloud-native architectures.
🔍 Pro Tip: Embrace microservices, containerization (like Docker), and orchestration tools (such as Kubernetes) to maximize the benefits of cloud-native development.
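On the resilience point, one of the simplest cloud-native building blocks is a liveness/readiness probe endpoint. Here is a minimal sketch using only the Python standard library; the `/healthz` path and port 8080 are arbitrary conventions, not requirements, and real services usually use a web framework instead:

```python
# A minimal health-check endpoint, standard library only: the kind of
# probe Kubernetes liveness/readiness checks call to decide whether a
# container is healthy. Illustrative sketch, not production code.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep probe traffic out of the logs

# To run: HTTPServer(("", 8080), HealthHandler).serve_forever()
```

In Kubernetes, a `livenessProbe` pointing at this path would restart the container when the endpoint stops answering.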
#CloudNative #ITIndustry #Innovation #Agility #Scalability #Kubernetes #Microservices #TechTrends | m_hussain |
1,915,886 | sample post | this is a sample post wrote on the python session | 0 | 2024-07-08T14:24:19 | https://dev.to/tshrinivasan/sample-post-143k | test, sample | this is a sample post wrote on the python session | tshrinivasan |
1,915,887 | 17 Free Open-source Icon Libraries (Carefully Curated List, Filterable & Sortable) - Need Feedback | Hi guys, I want to share my latest work on how i serve a resource list content on my blog... | 0 | 2024-07-08T14:25:58 | https://dev.to/syakirurahman/17-free-open-source-icon-libraries-carefully-curated-list-filterable-sortable-need-feedback-p66 | webdev, beginners, design, opensource | Hi guys,
I want to share my latest work on how I serve resource list content on my blog [https://devaradise.com/icon-libraries/](https://devaradise.com/icon-libraries/)
So, the idea was to create a user-friendly page dedicated to listicle content, where users can filter and sort based on any attribute related to the resources.
I need feedback on the UX. Please let me know what you think :smile:
As for the list, so far I have curated 17 free open-source icon libraries. This is sorted by GitHub stars. You can sort by number of icons on the original page.
1. [FontAwesome - Internet's icon library and toolkits](https://fontawesome.com/)
2. [Google Material Symbols & Icons](https://fonts.google.com/icons)
3. [Feather - Simply beautiful opensource icons](https://feathericons.com/)
4. [Heroicons - Beautiful hand-crafted SVG icons](https://heroicons.com)
5. [Ionicons - Premium Hand-crafted Icons](https://ionic.io/ionicons)
6. [CSS.gg - Pure CSS Icons](https://css.gg/)
7. [Lucide - Beautiful & consistent icons](https://lucide.dev/)
8. [Eva - Beautifully Crafted Opensource Icons](https://akveo.github.io/eva-icons/#/)
9. [Octicons - Scalable icons handcrafted by GitHub](https://primer.style/foundations/icons)
10. [Bootstrap Icons](https://icons.getbootstrap.com/)
11. [Remix - Simply Delightful Icon System](https://remixicon.com/)
12. [Evil Icons - Simple and clean SVG icon pack](https://evil-icons.io/)
13. [Phosphor - Flexible icon family](https://phosphoricons.com/)
14. [Boxicons - High Quality Web Icons](https://boxicons.com/)
15. [Hugeicons - Beautiful Icons](https://hugeicons.com/)
16. [Akar Icons - A perfectly rounded icon library](https://akaricons.com/)
17. [Flowbite Icons - SVG icons built for Flowbite](https://flowbite.com/icons/)
This list will keep growing as I find new icon libraries. So, if you know other icon libraries worth mentioning, please let me know in the comments below!
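For anyone curious, the sort-by-attribute behavior behind such a list boils down to a few lines. A tiny Python sketch (the star and icon counts below are invented for illustration, not the real numbers):

```python
# Illustrative only: the numbers are made up, not real counts.
libraries = [
    {"name": "FontAwesome", "github_stars": 73000, "icons": 2000},
    {"name": "Feather",     "github_stars": 24000, "icons": 287},
    {"name": "Lucide",      "github_stars": 9000,  "icons": 1400},
]

# Sort by any attribute the user picks, descending.
by_stars = sorted(libraries, key=lambda lib: lib["github_stars"], reverse=True)
by_icons = sorted(libraries, key=lambda lib: lib["icons"], reverse=True)

print([lib["name"] for lib in by_stars])  # ['FontAwesome', 'Feather', 'Lucide']
print([lib["name"] for lib in by_icons])  # ['FontAwesome', 'Lucide', 'Feather']
```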
Thank you. Good day! | syakirurahman |
1,915,970 | Python first program - Getting started with Python | Python download link : https://www.python.org/downloads/ Install the latest stable version of... | 0 | 2024-07-08T14:29:27 | https://dev.to/kannansubramanian/python-first-program-getting-started-with-python-2i07 | python, programming | Python download link : https://www.python.org/downloads/
1. Install the latest stable version of python.
2. Install an IDE - Visual Studio Code
3. Create Python file (FirstCode.py)
4. Write your first code
```
print("Kannan Subramanian")
```
Press F5 / the Play button to run the code

Happy coding.....
| kannansubramanian |
1,915,971 | Apirone update | Apirone is to send callbacks in five blocks in the Tron blockchain, not every block as in all other... | 0 | 2024-07-08T14:31:17 | https://dev.to/apirone_com/apirone-update-2lk9 | Apirone now sends callbacks every five blocks in the Tron blockchain, not every block as in all the other supported blockchains.

When sending on every block, callbacks can arrive up to 30 times a minute. At that rate, small PHP hostings may fail to handle the requests, which can affect an online store's operation or cause the script to stop on timeout. | apirone_com | |
1,915,972 | The Beginning... | Today I joined the online Python class. After installing python-3.12.4 on a Windows computer, I opened IDLE.... | 0 | 2024-07-08T14:32:28 | https://dev.to/mahaloghu/tottkkm-1630 | beginners, help, idle | Today I joined the online Python class. After installing python-3.12.4 on a Windows computer, I opened IDLE. In it,
![python error](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9agyi9izj9tedanbsbyk.png) an error message appears, as shown in the image. How should I approach this? | mahaloghu |
1,915,973 | Isolation of git_sync and task (uses scripts from codebase) running in a DAG | Isolation of git_sync and task (uses... | 0 | 2024-07-08T14:36:29 | https://dev.to/anantha_lakshmimeruva_51/isolation-of-gitsync-and-task-uses-scripts-from-codebase-running-in-a-dag-2g39 | {% stackoverflow 78721362 %}
https://stackoverflow.com/q/78721362/25753693 | anantha_lakshmimeruva_51 | |
1,915,974 | Path to Python | Attended the class conducted by Kaniyam Foundation. Nice and simple speech. clear path forward. I... | 0 | 2024-07-08T14:42:16 | https://dev.to/mani_prabhu_m/path-to-python-1b1o | beginners, python, firstpost | Attended the class conducted by Kaniyam Foundation.
Nice and simple speech. Clear path forward. I have attempted to learn Python many times, but made no progress after learning all the data types, conditions, etc.
Hope this time the journey will continue.
MK | mani_prabhu_m |
1,915,975 | first day of python | Am from medical background. totally new for computer programming. knows somethings today from... | 0 | 2024-07-08T14:42:41 | https://dev.to/ganesh_balaraman_6edae0d9/first-day-of-python-3hm4 | I am from a medical background and totally new to computer programming. I learned some things today from the kaniyam.com group, i.e., what Python is and some of its applications, and I also tried print. It works. Lovely. | ganesh_balaraman_6edae0d9 | |
1,915,976 | How to Summarize YouTube Videos Using Gemini, ChatGPT, Claude, and Perplexity in 2024 | In this article, I want to showcase AI tools for creating summaries from YouTube videos. These AI... | 0 | 2024-07-08T14:43:19 | https://dev.to/proflead/how-to-summarize-youtube-videos-using-gemini-chatgpt-claude-and-perplexity-in-2024-1732 | ai, chatgpt, gemini, howto | In this article, I want to showcase AI tools for creating summaries from YouTube videos. These AI tools can quickly summarize a video’s content so you don’t have to watch the entire thing. I’ll demonstrate how to use these AIs to rapidly extract the main points from videos.
## AI Tools for Video Summarization
Video summaries can save time, help you grasp key points quickly, and allow you to decide if watching the full video is worthwhile.
I will use these four popular AI tools:
- ChatGPT
- Claude
- Google Gemini
- Perplexity AI
## How to Use Each Tool
### How to summarize a video with ChatGPT
ChatGPT now has internet access, making it easier to summarize videos directly.
- Open the website https://chatgpt.com/ and register there.
- Copy the video link from YouTube.
- Ask ChatGPT to “Summarize the video” along with the link.
- If needed, ask for additional information to get a more comprehensive summary.
It usually works immediately out of the box, but sometimes, if the video is new, it could show an error. In this case, please follow the steps to get the summary from the transcript.

### How to summarize a video with Claude
Create an account on the https://claude.ai/ website to get access to AI.
Claude doesn’t have direct internet access, so you cannot just pass the link to the video; you need to pass a transcript.

### How to Get a Transcript from YouTube Video
- Open the video on YouTube.
- Click on the “Show Transcript” button below the video description.

- Copy the entire transcript.

- Paste the transcript into Claude and ask it to summarize it.

### How to summarize a video with Google Gemini
If you haven’t registered yet, go to https://gemini.google.com/ and register there. Gemini performs very well with video summarization without needing extra steps. It also works well with new videos.
- Copy the video link
- Ask Gemini to summarize the video and provide the link.
- Gemini will generate a detailed summary without extra steps.

### How to summarize a video with Perplexity AI
Visit https://www.perplexity.ai and register there. Perplexity offers the most detailed summaries but doesn’t work with newly added videos.
- Copy the video link
- Ask Perplexity to summarize the video and provide the link
- Perplexity will generate a comprehensive summary

## Conclusion
Based on the comparison in the video, I’ll rank these tools in the following order:
- Perplexity
- Google Gemini
- ChatGPT
- Claude
Each tool has its strengths, and the best choice may depend on your specific needs and the video content you’re summarizing.
## How to summarize a video - YouTube Tutorial
{% embed https://youtu.be/LjtI64uEU8w?si=xLg5EXXApRTsEkRQ %}
[How to summarize a YouTube video with AI in 2024](https://youtu.be/LjtI64uEU8w?si=xLg5EXXApRTsEkRQ)
Try these tools and see which works best for you! Don’t forget to share your favorite in the comments!
Cheers!
| proflead |
1,915,977 | 08-07-2024 | The error from my previous post did not appear after disabling K7 security. IDLE opened.... | 0 | 2024-07-08T14:43:57 | https://dev.to/mahaloghu/08-07-2024-18gn | print, post, help | The error from my previous post did not appear after disabling K7 security. IDLE opened.
My question: what should I do to open it without disabling K7 security? | mahaloghu |
1,915,978 | First Lesson in Python | 8/7/2024, Chennai. The Python class started today. I listened to the introductory talk. print command in Google Colab... | 0 | 2024-07-08T14:44:22 | https://dev.to/rjagathe/mutl-paattm-1m62 | 8/7/2024, Chennai
The Python class started today.
I listened to the introductory talk.
I tested it by running a print command in Google Colab.
Python was installed on the computer.
I have installed PyCharm as the IDE. In class, the teacher said he uses VS. So, I don't know how useful PyCharm will be... we'll see. | rjagathe | |
1,915,979 | No Code Test Automation A Complete Guide | No code this word has become popular in the last few years, from no-code software development to... | 0 | 2024-07-08T14:45:25 | https://dev.to/jamescantor38/no-code-test-automation-a-complete-guide-4a76 | nocodetestautomation, testgrid | The term “no code” has become popular in the last few years; from no-code software development to no-code website development, every business is exploring this technology, and software testing is no different.
Today many software teams have adopted no-code test automation, and many businesses are shifting from conventional, time-consuming manual testing to no-code or low-code automation testing. Wait…
You have not yet shifted to no-code testing and still have a team of manual testers? Man o man, you are wasting a massive amount of time and money, but there is still scope for improvement.
Welcome to this article. Here we discuss no-code automation testing in detail, why you should go for no-code automation testing tools, and much more, so read the whole article and meet us at the conclusion.
## What Is No Code/Low Code Automation Testing?
No-Code or low-code automation is an innovative approach to developing automation tests that enables you to test an application without writing a single line of code or script.
The goal is to simplify the setup so that automating a test scenario takes less time and almost no coding. For many tasks, especially in IT automation, the less code there is, the easier it is.
Teams can use codeless automation to automate the process of writing test scripts regardless of skill level.
## How Do Low-Code/No-Code Approaches Help Organizations Deliver Software More Efficiently?
**01 More Accuracy**
No-code Test automation reduces the chances of error in code because it requires less human intervention.
The problem is that a human tester can make mistakes at any stage of the evaluation process. However, the machine will not. Because generated test cases are more precise than human testers, eliminating human errors reduces the risk of failure.
Working with an AI + no-code automation feature in tools like TestGrid can help you unlock the full potential of test automation accuracy. It’s an artificial intelligence-powered solution that can outperform humans in software testing.
**02 Cost-Effective**
[Manual testing is time-consuming](https://testgrid.io/blog/5-unavoidable-problems-you-will-face-in-manual-testing/) and tedious for the execution of repetitive tests. The cost of manually testing your application grows over time.
In contrast, automated testing is less expensive in the long run because once created, test scripts can be reused 24/7 at no additional cost.
True, the initial cost of automation adoption may be high, but it will quickly pay off.
The greater the number of automated test cases generated and used, the greater the return on investment.
With no code automation tools, you get tons of features like reuse testing, AI automation, cloud, parallel testing, and tons more features that enhance the speed of software deployment and thus ensure better ROI.
**03 Increased Productivity**
Because no-code automated tests do not require human intervention while running, you can test your app at night and collect the results the following day.
Software developers and QAs can spend less time on testing because automated tests can repeatedly run on their own.
Essentially, automation allows your engineers to focus on critical tasks, which will help make software better by adding more new features, thus increasing revenue for the company.
**04 Quick Feedback**
Another advantage of automated testing is immediate feedback. With fast test execution, users receive testing reports instantly, allowing them to respond quickly if a failure occurs.
When your application is already on the market, immediate feedback is especially useful. However, manual testing will only slow down the process if you need to fix some bugs quickly.
On the other hand, with no-code test automation tools, you can make quick changes to your application. As a result, no code automation testing improves team responsiveness, user experience, and customer satisfaction.
Also, with no code automation tools like TestGrid, you can get very detailed and instant testing reports, which helps you understand and fix the bugs faster.
**05 High-Quality Great Performance**
Because of the extensive test coverage, automated testing will ensure your app’s high quality and performance.
It enables you to run thousands of automated test cases concurrently, allowing you to quickly test your app across multiple platforms and devices.
And with tools like TestGrid, you get cloud infrastructure that maximizes test parallelism and concurrency. Cloud infrastructure helps cover all the required OS and hardware configurations.
Furthermore, automated testing allows you to quickly create a large number of test cases, including complex and lengthy ones. Of course, you will never be able to do this if you choose to test your app manually. And with fast-moving technology, it’s a must.
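For intuition, here is what running test cases concurrently looks like at the code level: a worker pool fanning out independent checks. This is a generic Python sketch, not TestGrid's actual implementation, and `run_check` is a made-up stand-in for a real test case:

```python
from concurrent.futures import ThreadPoolExecutor

def run_check(case_id: int) -> tuple[int, bool]:
    # Stand-in for a real test case; here every check trivially passes.
    return case_id, True

# Fan 100 independent checks out over 8 worker threads.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_check, range(100)))

passed = sum(1 for _, ok in results if ok)
print(f"{passed}/100 passed")  # 100/100 passed
```

Because each check is independent, wall-clock time shrinks roughly with the number of workers, which is the same idea cloud device farms apply at the scale of real browsers and phones.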
**06 No Technical Obligations**
Even if you are not a software tester, you can test software efficiently by using a no-code automation tool.
Thus, anyone with no coding knowledge can run the tests. This saves the company time and money simultaneously, which can be directed to higher-ROI work.
Also, as you need to provide regular software updates, it is difficult to assign a team for testing every time, in terms of both money and time. It’s not reliable, but with no-code automation testing tools like TestGrid, you can test updates quickly and release them to the market early.
**07 Record Time To Market**
The time to market decreases when a cycle is efficient. A product can reach its target audience in record time with the efficiency and agility of low-code and no-code automation.
## Benefits of Low-Code/No-Code Approaches
**01 Rapidly Construct Modular Tests That Require Minimal Maintenance**
No-code automated testing is a comprehensive approach to automating tests. No-code test suites are simple to maintain, which matters because the maintenance phase is an essential part of the product development lifecycle.
Furthermore, you may have noticed that scripts written for automated testing frequently fail during the maintenance phase. This is simply because the framework is not designed with reusability or traceability in mind. On the other hand, no-code automated testing includes traceability of all reusable components.
Carefully constructed no-code automation technologies maintain the traceability of all reusable components, increasing the possibility of object-oriented test automation.
When the product under test changes, changes to the test suites are also simple to incorporate in no code testing, improving agility and reaction times.
**02 Reuse Core Test Components Across Different Types of Testing**
Most no-code testing tools allow you to reuse test steps across projects. However, testers no longer need to manually integrate new changes or update their test cases because this is now done automatically.
Instead, no code tools prioritize reusability and simplify maintenance by leveraging machine learning and AI technologies for managing, healing, and maintaining application objects and elements in test suites.
The majority of codeless testing tools support a wide range of application types, such as web, desktop, or virtual applications. Increased test coverage means the QA team has more bandwidth to apply test automation on more platforms simultaneously in a much shorter time.
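The same reuse idea exists in coded testing as data-driven tests: one test body applied to many data sets. A generic Python sketch for contrast (the login logic is a hypothetical stand-in, not any real system):

```python
# One reusable check applied to many cases: the coded analogue of
# reusing test steps across projects.
def check_login(username: str, password: str) -> bool:
    # Hypothetical system under test.
    return username == "valid_user" and password == "valid_pass"

CASES = [
    ("valid_user", "valid_pass", True),
    ("valid_user", "wrong_pass", False),
    ("", "", False),
]

def run_suite(cases):
    # Each tuple is (username, password, expected result).
    return [(u, check_login(u, p) == expected) for u, p, expected in cases]

print(run_suite(CASES))  # every case matches its expectation
```

No-code tools give you the same table-of-cases workflow through a UI instead of source code.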
**03 Painlessly Update The Test Suite As Application Requirements Evolve**
No code/low-code automation tools make it simple to automate large test suites and accelerate product/service delivery. Furthermore, functional testers can benefit from the user-friendly interface to efficiently create test scripts.
The faster your automated testing is completed, the faster you can identify bugs and provide feedback.
In addition, it speeds up the creation and execution of tests, even for the most complex test scripts. These tools enable engineers to save time and concentrate on delivering more innovations to customers.
**04 Engage More Business-Focused Domain Experts In The Testing Process**
Product managers and other non-technical people may be hesitant to offer suggestions while developing an automation framework with coding.
They can, however, freely engage in different automation approaches and strategies in a no-code test automation suite. Expert advice is extremely valuable throughout the development process.
The no-code testing framework can become more robust and reliable with domain experts’ continuous process monitoring and tracking.
## Low-Code/No-Code Approaches With TestGrid
**01 Save Your Time**
Hiring a team of manual testers wastes time and resources. You need to contact them regularly for one or more issues. Nowadays, you need to provide constant software updates. In that case, you also need to contact the manual testers.
However, with automation testing, you only need a tool, and testing can be done with a few clicks. With the TestGrid scriptless automation tool, everything is ready for you, so with just a few clicks, you can test software or a website with TestGrid.
**02 Run Tests Whenever And Wherever**
Fixed office hours demand that you complete the work within those hours; however, sometimes you do not feel like working then and would rather finish the work after office hours.
Well, TestGrid has listened: with its cloud infrastructure, you can now conduct tests any time, anywhere.
Also, this even facilitates outsourcing this work, making the job easy and fast. So you are just one login away from running 100s of tests whenever and wherever with the TestGrid cloud server.
**03 Reuse Without Wasting A Second**
Rewriting the same test cases several times for every new software is one of the time-consuming tasks in testing mobile or web app software. Therefore, we design the automation testing for TestOS in a way that allows the QA teams to save and reuse nearly all of the tests on various versions of the app as well as on other apps.
**04 Reduce Expenses & Gain Profits**
The amount of work your team members must do manually to test the apps will be reduced through automation with TestOS in your toolkit.
As a result, your company can better use its team and devote more time to more productive and profit-generating tasks rather than tedious and time-consuming ones.
**05 Multiple Testings**
TestGrid automation tool provides you with everything you need for software testing. With the same dashboards, environment, and tools, you can perform a wide range of mobile tests, including functional testing, performance testing, operational testing, security testing, and many more… This will allow you to scale your testing business quickly and smoothly.
**06 All-In-One Integrations**
Unlike manual testing, where you may have to pay for testing every time, with TestOS automation testing software you can test multiple things for multiple devices in multiple ways an infinite number of times.
You can also use TestGrid.io to add existing scripts or program the test environment to your specifications. For example, an exclusive feature in TestGrid.io automation, robotic testing, tests your software for hardware test cases and IoT testing.
**07 Real Device Testing**
TestOS’s REAL device cloud will enable your team or you as an individual tester to test across 100+ real devices and browsers, including Android and iOS devices.
Teams can either perform manual testing on the appropriate device or run automated parallel tests across multiple devices.
**08 Security**
As security is a top priority for any application or website, TestGrid includes an integrated SAST report. Static application security testing (SAST) examines source code for security flaws that could compromise your application.
So, with an auto-generated SAST report generated with each build execution, you can determine whether an application or website is security vulnerable or not.
**09 Fast Operation**
With built-in data management, ready-to-use infrastructure, CI/CD integration, robotic testing for hardware and IoT testing, and detailed reporting, test grid automation makes testing simple for your organization.
**10 All-In-One**
Testing becomes tedious with the increasing complexity of the website and mobile applications. As a result, organizations rely heavily on testers to perform such complex testing, and testing on multiple browsers necessitates using numerous tools. However, with TestGrid automation tools, we can automate the entire process with a single tool.
We have integrated different tools and provide solutions like:
- Performance testing – TestOS showed performance metrics without additional effort and identified crucial memory leaks
- Integrated metrics – identified bugs early by using Sonarqube + Bamboo + TestOS
- Browser cloud – enabled an offshore model for the client with the use of remote browsers
- Automation – recreated the client’s Selenium test cases in TestOS with 60% less effort
**11 Easy To Understand and Resolve**
Manual testing necessitates a detailed understanding of bugs and ways to solve them, but the goal of TestGrid is that anyone with decent knowledge should be able to resolve bugs, which we have accomplished. With the test grid test automation tool, we instruct bugs in simple language, making it simple to fix bugs.
TestGrid is a one-stop shop for all of your automation testing needs. Unfortunately, many tools lag in one way or another, but with proper integration, we have made even testing of complex software and websites simple and less time-consuming.
Test grid automation improves reliability and scalability and saves money and time. No code test automation, built-in test data management, ready-to-use infrastructure, cloud server, and the list will go on as we ensure that we add new features to make it more and more up-to-date.
TestGrid also provides 24/7 support and is genuinely a value-packed tool and one to look for no-code automation.
## Conclusion
The importance and adaption of no-code automation tools have increased tremendously, and it will become a go-to option for the QA team. It’s the must-have in everyone’s tool kit today.
If you still do not use the no-code automation tool, this article might be eye-opening for you. I hope after this article, you will explore this no-code automation testing tool and might go for one.
Source: This blog was originally published at [TestGrid](https://testgrid.io/blog/no-code-test-automation/)
| jamescantor38 |
1,915,980 | Important Software Testing Documentation: SRS, FRS and BRS | Despite how much we all hate writing documentation, it’s one of the essential steps in any industry,... | 0 | 2024-07-08T14:46:42 | https://testfort.com/blog/important-software-testing-documentation-srs-frs-and-brs | qa, testing, softwaredevelopment, webdev | Despite how much we all hate writing documentation, it’s one of the essential steps in any industry, including software testing. Having clear documentation is very important. It’s like having a roadmap. When you know exactly what you’re doing and how things need to be done, there’s less of a risk of your team getting lost in translating the requirements and more chances of ensuring your software product turns out how you want it to.
Documentation is a crucial part of any organization, not just startups, unlike what many companies wrongly believe. Big companies, especially when making significant changes, have just about as strong a need for solid documentation as smaller companies – if not more.
Imagine, all of a sudden, Google decides to overhaul its tech system. What would happen if they did it without having documentation? A catastrophe. With millions of lines of code, complex algorithms, and countless interconnected services, any step aside could spell disaster, causing delays, budget overruns, compatibility issues, bugs, and glitches, and tarnishing Google’s good name and reputation. As you can imagine, it won’t be long before customers become disappointed with how the system works and move on to a more reliable platform.
The same goes for smaller companies. Without clear software testing documentation, they risk building a product that doesn’t quite meet the needs of their target audience and ultimately losing a lot of money. Therefore, maintaining documentation is an absolute necessity to ensure you and your team are headed in the right direction.
In this article, we’re going to dwell upon three main document types most commonly used in the world of software development and testing such as SRS (software requirement specification), FRS (functional requirement specification), and BRS (business requirement specification), including the critical differences between them. This will help you understand the intricacies of QA documentation and navigate it with more confidence and ease. Buckle up, and let’s get started.
# What Is Software Testing Documentation?
Test documentation is written guidance consisting of a series of reports describing a software product’s features and functionality and giving the test and development team a clear understanding of what needs to be built and how it should be tested.
With documentation at hand, QA departments can effectively plan and execute their testing efforts, ensuring they have the right coverage and resources needed to validate the software’s functionalities and requirements. Additionally, testing documentation serves as a reference point, reminding the testing team of the important details that require their attention. This way, they can track what has been done and ensure they stay within their testing strategy.
Testing documentation is typically created at the point in time when the QA team gets involved in the software development process. However, it may undergo changes with version control depending on the peculiarities of the software project. All team members working on the project have access to this documentation, allowing them to stay on the same page.
Now, let’s go over each of the test documents used in software development. We’ll focus on the main difference between FRS and SRS documents and the difference between BRS and FRS in software testing, as well as their advantages.
# What Is an SRS Document?
An SRS or software requirement specification is a document prepared by a team of system analysts to describe the software being developed, the business purpose and functionality of a particular product, and how it performs its core functions.
The SRS in testing serves as the basis of any project, providing a framework that each team member is expected to follow. The SRS is also the basis of the contract with stakeholders (users/clients), which includes all the details about the functionality of the future product and how it should work. Software developers widely use the SRS during product or program development.
Delving into more details, the SRS document typically includes functional and nonfunctional requirements, and use cases. A good SRS document takes into account not only how the software interacts with other software or when it’s embedded in hardware but also potential users and their interactions with the software. It also contains references to tables and diagrams so that the team can get a better picture of all the product-related details.
Overall, the SRS document is one of the most important types of testing documentation that helps team members from different departments stay in tune with the project’s progress and ensure all of its requirements are met. As a result, the team needs less time to develop the software and can minimize the expenses associated with software development.
# How to create a software requirement specification
The SRS is one of the core documents in the process of QA that software developers and testing engineers refer to repeatedly during the course of a project. Below is a breakdown of the key steps needed to create a solid SRS document.
![Software testing documentation](https://testfort.com/wp-content/uploads/2019/11/3-Important-Software-Testing-Documentatio.png)
# 1. Create an Outline
The first step to creating SRS documentation is to create an outline. This can be something you put together yourself, or you can use an SRS template readily available for download online. In general, this document should include the following information:
- Purpose of the product;
- Target audience;
- Intended use;
- Product scope;
- Key terms and acronyms.
In addition to that, you need to have dedicated sections describing the product’s purpose, user needs, assumptions and dependencies, functional and nonfunctional requirements, and external interface requirements.
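To make this concrete, the outline can even be checked mechanically. The sketch below is a hypothetical helper, not part of any standard tool, that scans an SRS draft written in Markdown for the sections listed above:

```python
import re

# Sections the outline above calls for; adjust the list to your own template.
REQUIRED_SECTIONS = [
    "Purpose",
    "Target audience",
    "Intended use",
    "Product scope",
    "Key terms and acronyms",
]

def missing_sections(srs_markdown: str) -> list:
    """Return the required sections that have no heading in the draft."""
    headings = [
        m.group(1).strip().lower()
        for m in re.finditer(r"^#+\s*(.+)$", srs_markdown, re.MULTILINE)
    ]
    return [
        section for section in REQUIRED_SECTIONS
        if not any(section.lower() in h for h in headings)
    ]

draft = """# Purpose
# Target audience
# Product scope
"""
print(missing_sections(draft))  # -> ['Intended use', 'Key terms and acronyms']
```

Running such a check in CI on each documentation change is one cheap way to keep drafts from drifting away from the agreed outline.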
# 2. Define the purpose of the product
Now you can specify in more detail what the product’s purpose is. Start by deciding who on the team is going to have access to this document and how they’re going to use it. As a rule, SRS documentation is used by testers, software engineers, and PMs, but it can also be used by stakeholders and managers from other departments. Drafting SRS with your target audience in mind will help you minimize the effort needed to adapt this specification later on.
Further down, define the scope of the product and its objectives. This step is particularly important if the SRS is going to be used by team members from other departments, not just IT. Specify the goals you plan to achieve with this product and how these goals align with the long-term objectives of the business.
Next, provide explanations for any terms or acronyms mentioned in the SRS. Describing all the key terms will make it easier for your team to understand documentation. Besides, if you hire new people, it will take them considerably less time to familiarize themselves with the project and its requirements, expediting the process of onboarding.
# 3. Describe the software product you want to build
At this step, you need to provide detailed instructions for the product being built. To make it a bit easier, here are a few questions you may want to ask yourself:
- “Why are you thinking of building this product?”
- “Are there any particular problems it solves?”
- “Who’s going to use it?”
- “Is it a completely new product, or are you planning an update to an existing one?”
- “Will this product function independently, or will it need to be integrated into any other third-party apps?”
Answering these questions will help you ensure that you’ve covered every important aspect of your software product and that everyone involved understands its purpose and functionality.
Besides the product’s purpose, it’s essential to determine the various scenarios for the use of the product. Think about how potential users will interact with your software solution and what particular needs it will fulfill for them. For example, when building an eCommerce platform, you need to factor in the needs of sales and marketing departments and end users. In contrast, to build a medical CRM, you also need to take into account patients’ requirements.
Other than that, this section should detail assumptions and dependencies providing the team with a clear roadmap of the project’s direction and potential challenges. This includes any assumptions made about the product’s environment, user behavior, technological constraints, and dependencies on external factors such as third-party services or hardware components. By realistically assessing all of these factors up front, you’ll understand where your product might need to be revised and focus on the areas that matter most.
# 4. Specify the product’s functional and nonfunctional requirements
The final step of the SRS document is the most exhaustive but also one of the most important ones. Here you need to dive into details and cover each requirement as fully as possible.
By and large, the SRS includes both functional and nonfunctional requirements. Start by specifying the product’s functional requirements, which outline what the software should do in terms of features, capabilities, and interactions. This may include:
- Specific functionalities;
- User roles and permissions;
- Input and output data formats, and so on.
If your product requires integration with other systems, specify external interface requirements as well. A sub-category of functional requirements, they detail how the software interacts with other components, such as APIs, databases, etc.
Then, describe the nonfunctional requirements, which describe how the software should perform in terms of:
- Performance;
- Reliability;
- Security;
- Usability.
When defining the nonfunctional requirements, make sure to consider factors such as response times, scalability, data integrity, authentication mechanisms, and accessibility standards.
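For example, a vague statement like "the system must be fast" only becomes testable once it is tied to a metric and a threshold. The sketch below is purely illustrative; the field names are our own invention, not part of any SRS standard:

```python
from dataclasses import dataclass

@dataclass
class NonFunctionalRequirement:
    name: str         # short identifier, e.g. "Search response time"
    metric: str       # what is measured and under what conditions
    threshold: float  # the value a measurement must not exceed
    unit: str

    def passes(self, measured: float) -> bool:
        """A requirement passes when the measured value is within the threshold."""
        return measured <= self.threshold

# "The system must be fast" restated as a measurable requirement:
latency = NonFunctionalRequirement(
    name="Search response time",
    metric="95th-percentile latency under 100 concurrent users",
    threshold=300.0,
    unit="ms",
)
print(latency.passes(240.0))  # -> True
print(latency.passes(450.0))  # -> False
```

Written this way, each nonfunctional requirement doubles as an acceptance criterion the QA team can verify directly.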
# 5. Submit the SRS document for approval
Once all these sections are completed, the SRS document is sent to stakeholders for review and approval. By discussing the key points of the SRS together with project managers, testers, software developers, and team members from other relevant departments, you can eliminate ambiguity and misunderstandings and ensure that any concerns or questions are addressed and resolved.
# Advantages of writing a software requirement specification
As we’ve stated earlier, the SRS document is essential for companies of all scales that deal with specific kinds of requirement specifications. Documenting software requirements also brings a number of benefits that make it a crucial aspect of the software development process. Here they are:
- The SRS helps facilitate the process of estimating the project’s cost, timeline, and risks.
- It provides the team of developers and testers who work on the creation of a software product with a clear roadmap they can stick to.
- The SRS makes introducing new changes to the product easier as the team has a solid understanding of the project requirements.
- It helps minimize development costs by ensuring that every new enhancement undergoes meticulous assessment before any further step is taken.
- It serves as the foundation for the agreement between the IT team and the client, ensuring that the requirements for the project are fully understood.
# Common mistakes found in the SRS
- No explanation of acronyms and terms. It may seem that your team is well aware of the terms used to describe the project, especially if they all actively took part in reviewing the SRS. But the truth is, things like these can be easily forgotten and, if you recruit new people, misunderstood. Therefore, including a glossary of terms in the SRS is important.
- Questions and opinions instead of well-defined requirements. While it can be helpful to ask yourself questions when defining the product’s purpose, it’s the answers that need to be outlined in the SRS.
- No measurable quality requirements. Quite often, requirements leave out important details, such as what equipment should be used to measure the speed at which the system operates, what speed is considered optimal for the test to pass, and what exactly is meant by “operates.” All these little details must be specified in the software requirements.
- No requirements at all. Sometimes the requirements may be completely absent. Here’s an example of what that might look like: “We want an attractive user interface with smooth navigation.” This information lacks detail and will ultimately lead to misunderstandings during the development process.
# What Is a BRS Document?
Now to the question of what BRS means in software testing. BRS stands for business requirement specification, a document that shows how business requirements will be satisfied at a broader level. It’s an important document that describes the core goals or needs the client wants to achieve with the software or product, and it is usually created at the very beginning of the product’s life cycle. The final version of the document is typically reviewed by the client to make sure all business stakeholders’ expectations are captured correctly.
The BRS includes all the client’s requirements. Generally, it consists of the product’s purpose, users, the overall scope of the project, all listed features and functions, usability, and performance requirements. Tables and diagrams are typically not used in this type of document. As you can decipher from the name, the intended users of the BRS document are primarily upper and middle management, product investors, and business analysts.
# How to create a business requirement specification
BRS in software testing is as important as SRS and FRS, albeit from a different perspective, as it covers the most critical business aspects of a software product. Let’s go over the process of writing a concise BRS document.
1. Identify stakeholders and gather requirements
First things first, identify all the stakeholders involved in the project, including clients, users, managers, and business analysts. When you know the group of stakeholders, you can move on to the next step of gathering detailed requirements, which is best done by meeting them in person or through video conference calls, depending on the project’s business model.
2. Define the purpose and scope
Next, determine the purpose of the software solution and its scope. Unlike SRS, here you need to focus on the problems and needs that the product aims to solve from a business perspective, such as increasing customer engagement, boosting sales, improving communication, etc. At this stage, it can also be a good idea to describe the background of the project so that those new to your team can better grasp the main idea.
3. Document user requirements
Try walking in the shoes of your end users to understand what they may need from the product. Document their roles, responsibilities, and interactions with the software. This will help you understand their expectations and come up with features and functionalities that best fit their needs.
4. Create a realistic timeline for the main project milestones
Another important point to outline in the BRS document is the project timeline. To evaluate the length of the project and set deadlines, business analysts take into account factors such as resource availability, technical complexity, and potential risks. With this info, they can break the project into several key milestones, making it easier for the team to keep track of the project progress.
5. Include a brief cost-benefit analysis
Also, the BRS document should include a brief cost-benefit analysis, allowing stakeholders to estimate the investment required to spend on software product development and the potential returns. This should include expenses required for the project, such as the development team’s salaries, the cost of hardware and tools, software licenses, training, and the benefits the business may gain over time. In addition, it should have a short summary of the cost-benefit ratio that compares the anticipated benefits and expected expenses.
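The cost-benefit summary itself is simple arithmetic. The figures below are invented solely for illustration:

```python
# Hypothetical one-year cost figures for a software project (all in USD).
costs = {
    "development team salaries": 180_000,
    "hardware and tools": 15_000,
    "software licenses": 8_000,
    "training": 5_000,
}
expected_annual_benefit = 312_000  # projected revenue gain plus savings

total_cost = sum(costs.values())
benefit_cost_ratio = expected_annual_benefit / total_cost

print(f"Total cost: ${total_cost:,}")              # Total cost: $208,000
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")  # Benefit-cost ratio: 1.50
```

A ratio above 1.0 indicates the anticipated benefits exceed the expected expenses, which is exactly the kind of short summary the BRS should carry.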
# Advantages of writing a business requirement specification
Now let’s look at the advantages of keeping BRS documented.
- Firstly, the BRS document ensures that the product requirements don’t conflict with the business objectives.
- It helps identify potential risks in timeline and costs, reducing project delays and cost overruns.
- The BRS serves as a communication tool, facilitating collaboration between the client and the development and testing teams.
- The BRS lays the foundation for the development process, guiding software engineers in building a digital product that meets business needs and user expectations.
- Finally, it provides a documented record of requirements, changes, and decisions, making progress traceability much easier.
# Typical mistakes in the BRS
- Ambiguity. Ambiguous language or terms used throughout the BRS document can lead to confusion and misinterpretation of requirements.
- Lack of stakeholder involvement. Failure to involve all relevant stakeholders in the requirement-gathering process often leads to conflicting priorities and overlooked or incomplete requirements.
- Overly technical language. Another common mistake is the use of excessively detailed or technical language that can make it difficult for non-tech-savvy stakeholders to understand the requirements.
- Failure to update. It often happens that a team doesn’t update the BRS document when requirements change over time, leading to inaccurate information and inconsistencies in development.
![Software testing documentation](https://testfort.com/wp-content/uploads/2019/11/4-Important-Software-Testing-Documentatio.png)
# What Is an FRS Document?
Aside from these two, one of the other most accepted specification documents used in software testing is an FRS. The FRS stands for functional requirement specification – a document that outlines all the functions the software or product must perform. To put it differently, it’s a step-by-step sequence that covers the essential operations required to develop a product and explains the details of how certain software components are expected to behave during user interaction.
The main difference between the FRS and SRS documents is that the FRS does not include use cases. It might contain diagrams and tables, but this isn’t obligatory.
Out of the three, the FRS is the most detailed document. In addition to explaining how the software should function, this document also covers business aspects as well as compliance and security requirements to ensure it complies with SRS and BRS documents. No wonder this type of document is often referred to as the outcome of close collaboration between testers and developers.
In the course of the software development life cycle, the FRS is used by software developers to understand what product is expected to be built, while testers use it as a reference point to determine the different test cases and scenarios in which the product should be tested.
As a rule, the FRS document is created by software testers, developers, project managers, or someone else with in-depth knowledge of the system and specific kinds of requirement specifications.
![Software testing documentation](https://testfort.com/wp-content/uploads/2019/11/7-Important-Software-Testing-Documentatio.png)
# How to prepare a functional requirement specification
Along with the BRS and SRS, the FRS is a core document of the software development and testing life cycle, so it’s essential to know how to write it the right way.
1. Detail the project scope
To start off, this document should include the goals, functions, costs, tasks, and time frames of the project. In other words, you should go over each step of the project in detail, providing a comprehensive explanation of what needs to be done and when.
2. Specify risks, assumptions, and limitations
Furthermore, consider the potential risks that may hinder the development of a software product and affect its functional design. By analyzing risks and possible limitations, as well as building assumptions, you’ll have a higher chance of eliminating bottlenecks and developing a software product that will be a success.
3. Describe specific requirements, including system and database attributes
That’s where you need to provide details on how software is expected to function and what problems it’s going to address. Most often, this is done with the help of visual tools, such as sitemaps, wireframes, and screen flows, that help picture the key functionalities of the product and understand their impact on user experience.
4. Include use cases in text or diagram format
Next, provide detailed use cases, demonstrating the product’s functionality from the user’s perspective. This step involves describing various scenarios in which the software will be used, including the actions users take and the system’s responses. Use cases can be presented in text format or through diagrams, such as UML (Unified Modeling Language), to illustrate user interaction and the software components.
5. Provide user stories
Another essential component of the FRS is user stories. Informal descriptions of features or functionalities, user stories allow testers to look at the software product from the perspective of an end user and evaluate it against the functional requirements. For example, a user story for an eCommerce platform might be: “As a registered user, I want to be able to view my purchase history so that I can track my orders and manage returns more efficiently.”
User stories help prioritize features based on their importance to end users and provide a clear understanding of the user’s needs and expectations. They also serve as a basis for defining acceptance criteria, which specify the conditions under which a user story will be considered complete.
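Because user stories follow a fairly rigid template, malformed ones can be flagged automatically. A minimal sketch, assuming the common "As a …, I want … so that …" wording:

```python
import re

# Matches the conventional user-story template and captures its three parts.
STORY_PATTERN = re.compile(
    r"^As an? (?P<role>.+?), I want (?P<goal>.+?) so that (?P<benefit>.+)$",
    re.IGNORECASE,
)

def parse_user_story(story: str):
    """Return (role, goal, benefit) if the story follows the template, else None."""
    match = STORY_PATTERN.match(story.strip())
    return match.groups() if match else None

story = ("As a registered user, I want to be able to view my purchase history "
         "so that I can track my orders and manage returns more efficiently.")
print(parse_user_story(story))
# -> ('registered user', 'to be able to view my purchase history',
#     'I can track my orders and manage returns more efficiently.')
print(parse_user_story("Make the UI attractive"))  # -> None
```

The captured role, goal, and benefit can then feed directly into prioritization or into drafting acceptance criteria.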
6. Specify work breakdown structures or functional decomposition of the software
By this, we mean breaking down the project into smaller, manageable components or tasks to facilitate project planning, scheduling, and resource allocation. Determine any dependencies between tasks or components and organize these tasks into a hierarchical structure based on their objectives and priorities.
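Once dependencies between tasks are known, ordering them is a classic scheduling problem. A minimal sketch using Python's standard-library graphlib (the task names are invented for the example):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on (a hypothetical breakdown).
tasks = {
    "design database schema": set(),
    "implement API": {"design database schema"},
    "build UI": {"implement API"},
    "write integration tests": {"implement API", "build UI"},
}

# static_order() yields the tasks in an order that respects every dependency.
order = list(TopologicalSorter(tasks).static_order())
print(order)
# one valid order:
# ['design database schema', 'implement API', 'build UI', 'write integration tests']
```

The same structure also exposes which tasks can run in parallel: any tasks that become ready at the same time have no dependency between them.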
7. Create software and design documents and prototypes
Finally, you need to provide software and design documents along with prototypes. These are among the most valuable QA documentation resources, providing product owners and IT specialists with detailed information about the software’s architecture, design, and implementation.
# Advantages of writing a functional requirement specification
Writing a functional requirement specification provides numerous benefits that contribute to the success of a software development project. Let’s look at some of them:
- The FRS eliminates confusion due to misunderstanding about what goals need to be achieved.
- Similarly to SRS, this software testing document helps identify potential obstacles that may hinder the development process, allowing project teams to proactively address challenges and mitigate risks.
- The FRS serves as a guide for development and testing, providing the client with confidence that the software product will be built to the specified requirements as intended.
- This document helps prioritize features based on their importance to end users, allowing the software development team to work more efficiently and deliver the most valuable functionalities first.
# Mistakes common for the FRS
- Incomplete scope of work. Failing to fully define the scope of work can lead to missed functionalities and gaps in the final product.
- Failure to satisfy all the requirements. Inadequate analysis or oversight may result in certain product requirements not being addressed correctly.
- Low level of user involvement. If users aren’t actively involved in defining the requirements, the odds are high that the final product won’t meet your target audience’s expectations.
- Ignoring security requirements. Not including compliance requirements in the FRS can lead to security vulnerabilities or non-compliance issues, posing risks to the business and even leading to lawsuits.
![Software testing documentation](https://testfort.com/wp-content/uploads/2019/11/8-Important-Software-Testing-Documentatio.png)
# Who Should Write Testing Documentation for Software Projects?
As we briefly touched on a bit earlier, software testing documentation can be written by several members of the team. In fact, sometimes, the more people involved, the better, as you can address more issues and cover a wider range of requirements, eliminating roadblocks down the road.
However, the key point here is that the instructions must be written by competent people. Whether you decide to use project managers, technical writers, testers, or developers, it’s critical that they have domain expertise and experience writing relevant documentation. Otherwise, mistakes are inevitable.
Based on our own experience and the experience of other software companies, writing an SRS is usually assigned to technical writers, software engineers, or system architects, although depending on the specifics of the software, it can also be written by a business analyst.
Business analysts are responsible for the BRS document. They create a draft of business requirements and review it with project managers or product owners to ensure alignment with business goals and objectives. Once finalized, the BRS is reviewed by other key figures of the team from various departments to ensure full coverage of business needs.
As for the FRS, it’s usually a joint effort between testers and software engineers. Testers typically define functional requirements based on user scenarios and interactions, and developers complement them by adding information about the technical feasibility of implementing these requirements.
Overall, the responsibility for writing testing documentation should rest with people who have a deep understanding of both the technical aspects of the software and the business goals and can use language that makes that documentation easily accessible to all parties involved. For this reason, both startups and large enterprises working in software development environments should seek assistance with QA documentation.
TestFort can help you with that. Our team has extensive experience in writing various types of testing documentation, including SRS, BRS, and FRS. We understand the importance of clear documentation to ensure the success of software projects and work collaboratively to create documentation that accurately reflects the requirements, goals, and specifications of your specific software project. Whether you need assistance with technical writing or help developing testing strategies, checklists, or user guides, TestFort has the expertise you need.
# Bottom Line
Building a successful software product or service requires a rigorous testing process and detailed documentation. Therefore, whether you test software products using your in-house resources or collaborate with technical partners, FRS, SRS, and BRS should become a regular part of your quality assurance routine. By creating comprehensive software testing documentation, you ensure that everyone on the team can quickly get familiar with the product you want to design and work together to achieve your business goals and deliver a superior product to market.

*Author: testfort_inc*
---

# Deploy Automation with CI/CD
*Published 2024-07-08 on dev.to: https://dev.to/annalaura2/automatizacao-de-deploy-com-cicd-26gg*
Deploy automation with CI/CD (Continuous Integration and Continuous Delivery/Continuous Deployment) has become an essential practice in modern software development. This article explores what CI/CD is, its benefits, the main tools used, and how to implement it effectively in your projects.

**1. Introduction to CI/CD**

Continuous Integration (CI) and Continuous Delivery (CD) are practices aimed at improving the quality and speed of software development. CI involves frequently integrating every developer's code into a shared repository, where automated tests run to ensure that new code does not break the existing system. CD, in turn, automates the process of delivering new software versions, allowing those updates to be released quickly and reliably.

**2. Benefits of Deploy Automation**

Adopting CI/CD brings numerous benefits to development teams. Chief among them is the reduction of human error, since much of the process is automated. Continuous delivery also allows new features and bug fixes to reach users faster, increasing customer satisfaction. Another significant advantage is the ability to identify and fix problems earlier in the development cycle, which reduces the costs associated with production failures.

**3. CI/CD Tools**

There are many tools available for implementing CI/CD, each with its own characteristics and benefits. Among the most popular are Jenkins, a widely used open-source automation tool; GitLab CI/CD, which offers native integration with GitLab; and CircleCI, known for its ease of use and integration with many platforms. Other options include Travis CI, Bamboo, and Azure DevOps.

**4. Implementing CI/CD**

To implement CI/CD, the first step is to set up a version control system, such as Git, to manage the source code. Next, configure a CI server, such as Jenkins or GitLab CI, to automate the integration process. This involves setting up build pipelines that compile the code, run automated tests, and generate build artifacts. Finally, configure CD to automate the deploy process, ensuring that the software is delivered continuously and safely.

**5. CI/CD Pipelines**

CI/CD pipelines are the backbone of the automation process. They define a series of steps the code must pass through before it is considered ready for production. A typical pipeline may include steps such as linting (code style checking), unit tests, integration tests, building the artifact, and deploying to a staging or production environment. Clearly defining and automating these steps guarantees consistency and quality in every release.
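The stage-by-stage behavior of a pipeline (run each step in order, stop at the first failure) can be sketched in a few lines. The stage names and commands below are illustrative and not tied to any particular CI tool:

```python
import subprocess

# An illustrative pipeline: each stage is a name plus a shell command.
PIPELINE = [
    ("lint", "echo running linter"),
    ("unit tests", "echo running unit tests"),
    ("build", "echo building artifact"),
    ("deploy to staging", "echo deploying"),
]

def run_pipeline(stages) -> bool:
    """Run stages in order; abort on the first non-zero exit code."""
    for name, command in stages:
        result = subprocess.run(command, shell=True)
        if result.returncode != 0:
            print(f"Stage '{name}' failed, aborting pipeline.")
            return False
        print(f"Stage '{name}' passed.")
    return True

print(run_pipeline(PIPELINE))  # -> True when every stage succeeds
```

Real CI systems add caching, parallel jobs, and artifact handling on top of this basic sequential model, but the fail-fast ordering is the same.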
**6. CI/CD Best Practices**

To get the most out of CI/CD, it is important to follow a few best practices. First, keep your pipelines simple and easy to understand; this makes maintenance and collaboration among team members easier. Second, invest in good automated test coverage to ensure the code works correctly. Third, constantly monitor pipeline performance and adjust as needed to optimize speed and efficiency.

**7. Challenges and Solutions**

Implementing CI/CD can present some challenges, such as the initial pipeline configuration and the integration of tools. One of the main challenges is ensuring that all developers follow the same practices and standards. To overcome these challenges, it helps to invest in training and documentation and to promote a culture of collaboration and continuous improvement. Using tools that integrate well with each other can also ease the process.

**8. Security in CI/CD**

Security is a crucial aspect of deploy automation. It is essential to ensure that code is rigorously reviewed and tested before it is released to production. Security analysis tools, such as SonarQube, can be integrated into pipelines to identify vulnerabilities. It is also important to implement proper access controls so that only authorized individuals can modify or trigger deploys.

**9. Success Stories**

Many companies have succeeded with CI/CD. Amazon, for example, performs thousands of deploys per day thanks to the automation of its processes. Netflix uses CI/CD to ensure its service is always available and running without interruption. These success stories demonstrate the power of automation and how it can transform the way software is developed and delivered.

**10. Conclusion**

Deploy automation with CI/CD is a fundamental practice in modern software development. It not only improves delivery quality and speed but also reduces costs and increases customer satisfaction. By adopting CI/CD, development teams can focus on building high-quality software, trusting that their integration and delivery processes are managed efficiently and safely. If you have not yet implemented CI/CD in your projects, now is the perfect time to start.
*Author: annalaura2*
---

# What Is Affiliate Marketing? How to Make Money Online with Affiliate Marketing
*Published 2024-07-08 on dev.to: https://dev.to/terus_digitalmarketing/affiliate-marketing-la-gi-cach-kiem-tien-online-bang-affiliate-1dm9 (tags: webdev, website, marketing, terus)*
Affiliate Marketing là một mô hình kinh doanh trực tuyến, trong đó một doanh nghiệp (được gọi là "nhà cung cấp" hoặc "nhà quảng cáo") trả tiền cho các cá nhân hoặc tổ chức ("nhà phân phối" hoặc "Affiliate") để giới thiệu hoặc quảng bá các sản phẩm, dịch vụ của họ. Khi một khách hàng tiềm năng thực hiện một hành động nhất định, chẳng hạn như thực hiện một giao dịch mua, đăng ký hoặc tải xuống, Affiliate sẽ nhận được một khoản hoa hồng.
Trong mô hình này, Affiliate hoạt động như những đại lý tiếp thị, sử dụng các kênh như website, blog, email marketing, video,... để thu hút khách hàng tiềm năng và chuyển họ thành khách hàng thực của nhà cung cấp. Nhà cung cấp, hay còn gọi là "người bán", sẽ trả hoa hồng cho các Affiliate dựa trên các giao dịch mà họ giới thiệu.
Ưu điểm của Affiliate Marketing:
1. Chi phí khởi động thấp
2. Đăng ký thật dễ dàng
3. Không cần lo lắng về việc vận chuyển hay trả lại hàng
4. Không cần lãng phí thời gian để tạo ra sản phẩm hoặc dịch vụ
5. Không có yêu cầu đặc biệt
6. Thu nhập thụ động, kiếm tiền mọi lúc, mọi nơi
Hạn chế của Affiliate Marketing:
1. Phải mất rất nhiều thời gian để tạo ra lưu lượng truy cập và lượt giới thiệu ổn định
2. Bạn phải có kiến thức tốt về Internet Marketing
3. Quảng cáo bị hạn chế nghiêm ngặt
4. Có các yêu cầu thanh toán
Để thành công trong Affiliate Marketing, bạn cần:
1. Xác định rõ thị trường ngách và lựa chọn sản phẩm/dịch vụ phù hợp.
2. Xây dựng nội dung hấp dẫn và có giá trị cho khách hàng.
3. Tối ưu hóa website/kênh truyền thông để thu hút và chuyển đổi khách hàng.
4. Liên tục theo dõi và cải thiện hiệu quả của chiến dịch.
5. Xây dựng mối quan hệ tốt với nhà cung cấp Affiliate.
Có thể nói Affiliate Marketing là một mô hình kinh doanh trực tuyến rất tiềm năng, mang lại nhiều lợi ích cho cả nhà quảng cáo và nhà phân phối. Tuy nhiên, để thành công, người tham gia cần phải có chiến lược và kỹ năng marketing hiệu quả, đồng thời tránh các sai lầm phổ biến. Nếu làm tốt, Affiliate Marketing có thể trở thành một nguồn thu nhập đáng kể.
Tìm hiểu thêm về [Affiliate Marketing Là Gì? Cách Kiếm Tiền Online Bằng Affiliate](https://terusvn.com/digital-marketing/affiliate-marketing-la-gi/)
Các dịch vụ khác tại Terus:
Digital Marketing:
* [Dịch vụ Chạy Facebook Ads Tối Ưu Chi Phí](https://terusvn.com/digital-marketing/dich-vu-facebook-ads-tai-terus/)
* [Dịch vụ Chạy Google Ads Tăng Doanh Thu Vượt Trội ](https://terusvn.com/digital-marketing/dich-vu-quang-cao-google-tai-terus/)
Thiết kế Website:
[Dịch vụ Thiết kế Website Tích hợp Affiliate Marketing](https://terusvn.com/thiet-ke-website-tai-hcm/)
| terus_digitalmarketing |
1,915,983 | Python print() function | I learned some Python basics today. Sharing what I learned today with examples. The name as it... | 0 | 2024-07-08T14:54:59 | https://dev.to/karthik_guna_057168ec9458/python-print-funtion-44ea | python, function, print | I learned some Python basics today.
Let me share what I learned today with examples.
As the name suggests, print() outputs whatever you enter inside the print function.
```
print("Hello")
```
Additionally, you can pass variables of other data types, such as strings or integers.
```
anime1 = "dragon ball"
anime2 = "naruto"
print("which anime do you like ? a=", anime1, " or b=", anime2)
```
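As a related tip beyond what the post covers, f-strings (Python 3.6+) let you embed the variables directly in the string instead of passing them as separate arguments. This is just an illustrative sketch:

```python
anime1 = "dragon ball"
anime2 = "naruto"

# An f-string embeds each variable's value directly into the text
message = f"which anime do you like ? a={anime1} or b={anime2}"
print(message)  # which anime do you like ? a=dragon ball or b=naruto
```

Note that comma-style print() inserts a space between every argument, while f-strings give you exact control over spacing.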
| karthik_guna_057168ec9458 |
1,915,984 | The Importance of Code Coverage: Should I Aim for 100%? | Disclaimer This text was produced by generative AI from the transcript of... | 0 | 2024-07-08T14:56:26 | https://dev.to/asouza/a-importancia-da-cobertura-de-testes-devo-buscar-100--3hho | | ## Disclaimer
This text was produced by generative AI from the transcript of an episode of our channel, Dev Eficiente. [The full episode can be watched on the channel.](https://youtu.be/dmj53dbgv68)
## Summary
In today's post, I want to talk to you about a topic that is crucial for software quality: test coverage. Should I aim for 100% test coverage or not? Let's explore this question together.
## How My Thinking About Test Coverage Evolved
In the past, I believed that aiming for 100% test coverage was a mistake. I used to say that testing everything was pointless and a waste of time. Over time, however, through study and conversations with experts such as Maurício Aniche, my understanding of the role of automated tests changed. Today, I believe that if you are not aiming for 100% coverage, you are probably doing it wrong.
## The Reality of Testing
When we talk about test coverage, we are not saying you should test every line of code in isolation. The idea is that by writing tests for the code that really matters, you end up exercising the other production code that does not need direct tests. This means that by testing in a more integrated way, you cover a larger share of your code efficiently.
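As a tiny illustration of this idea (my example, not from the original post): a single test of a high-level entry point also exercises the helper it calls, so the helper gets covered without a direct test of its own:

```python
# A small helper that has no test of its own
def normalize(name):
    return name.strip().lower()

# The code that really matters, which calls the helper internally
def greet(name):
    return f"hello, {normalize(name)}"

# One integrated test of greet() also exercises normalize()
assert greet("  Alice ") == "hello, alice"
print("coverage reaches both functions")
```

A coverage tool run over this single test would report both functions as covered, which is the sense in which integrated tests push coverage up efficiently.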
## How I Deal with the Test Pyramid
The famous test pyramid suggests that we should have a solid base of unit tests, followed by integration tests, then end-to-end tests, and finally manual tests.
That is not how I work today. My current approach is to test as close to reality as possible. If I can test everything quickly and in an integrated way, I will. Of course, speed is a constraint, and it is not always feasible to spin up every system to run integrated tests all the time.
## The Search for Ideal Coverage
My goal is to exercise the code in the most integrated way possible. Assuming that all code written for production is necessary, I expect to touch 100% of the code.
For a greenfield project, I always expect 90%+ coverage. After all, the basic logic is: the more I test, the more reliability increases.
Test coverage is a path to increasing software reliability, minimizing bugs and problems in production.
## Conclusion
Aiming for 100% test coverage while maximizing integration is an ambitious goal, but one that brings many benefits. It increases the chances that the code is exercised in an integrated way, revealing problems as early as possible.
I hope this post was useful to you. Leave a comment, whether positive or constructive. I will be glad to reply. See you next time!
## About the Dev + Eficiente Journey
If you want to go deeper into code quality practices and become a technically recognized professional, [I invite you to check out the Jornada Dev + Eficiente.](https://deveficiente.com/condicao-especial-30)
| asouza | |
1,915,986 | Take the 2024 open source maintainer survey! | It is time for the third installment Tidelift state of the open source maintainer survey! Do you... | 0 | 2024-07-08T15:01:37 | https://dev.to/tidelift/take-the-2024-open-source-maintainer-survey-3692 | opensource, maintainers, survey, news | <p>It is time for the third installment Tidelift state of the open source maintainer survey! </p>
<!--more-->
<p><strong>Do you actively maintain one or more open source projects?</strong> If so, we'd love to learn from you. </p>
<p>The reports that come out of the Tidelift maintainer survey have become an important industry resource cited in hundreds of articles and studies over the past few years. We want to ensure your voice is represented in our 2024 state of the open source maintainer report.</p>
<p>The data we collect will help Tidelift better support open source maintainers like you and make the case for getting you the resources you need to continue maintaining healthy and secure open source projects.</p>
[Take the survey!](https://tidelift.az1.qualtrics.com/jfe/form/SV_8cfOxXluZDsXrrE?utm_source=devto&utm_medium=post&utm_campaign=survey2024)
<p>We appreciate your time, and after taking a moment to share your thoughts, <strong>we'd like to thank you with a few cool perks</strong>!</p>
<p>All open source maintainers who fill out the survey will receive our brand new 2024 pay the maintainers t-shirt. If you are already working as a lifter for Tidelift (or apply to lift a project and are accepted as a new lifter), we'll send you the t-shirt AND a custom Tidelift lifter hoodie (<a href="https://tidelift.com/about/lifter"><span>learn more about how lifting a project works here</span></a>). </p>
<h2>Here’s a preview of what we’re interested in learning this year</h2>
<p>We’re curious how things have changed since we conducted the 2023 survey at the end of 2022. Are more maintainers earning money? If so, where is it coming from? And if not, what additional work would you be willing to do if you <em>were</em> getting paid? What’s motivating you to keep working on your projects? What is holding you back from doing your best work? Has anything changed in the way you approach your maintenance work since <a href="https://blog.tidelift.com/xz-tidelift-and-paying-the-maintainers"><span>xz utils</span></a>? Has AI had an impact?</p>
<h2>More about our maintainer surveys</h2>
<p>We released the first maintainer survey report <a href="https://tidelift.com/subscription/the-tidelift-maintainer-survey"><span>back in 2021</span></a> and the <a href="https://tidelift.com/open-source-maintainer-survey-2023"><span>second in 2023</span></a>. Check out the top headlines from last year’s report:</p>
<ul style="font-size: 20px;">
<li style="color: #000000;" aria-level="1"><a href="https://blog.tidelift.com/despite-increasing-demands-most-maintainers-still-dont-get-paid-for-their-work"><span>Despite increasing demands, most maintainers don’t get paid for their work</span></a></li>
<li style="color: #000000;" aria-level="1"><a href="https://blog.tidelift.com/the-more-maintainers-get-paid-the-more-they-work-on-open-source"><span>The more maintainers get paid, the more they work on open source</span></a></li>
<li aria-level="1"><a href="https://blog.tidelift.com/paid-maintainers-do-more-security-and-maintenance-work-than-unpaid-maintainers"><span>In a shocker, paid maintainers do more security and maintenance work than unpaid maintainers</span></a></li>
<li aria-level="1"><a href="https://blog.tidelift.com/maintainers-want-to-do-creative-work-that-matters"><span>Maintainers want to do creative work that matters and makes an impact</span></a></li>
<li aria-level="1"><a href="https://blog.tidelift.com/open-source-maintenance-can-be-stressful-lonely-and-financially-unrewarding"><span>Open source maintenance can be stressful, lonely, and financially unrewarding</span></a></li>
<li aria-level="1"><a href="https://blog.tidelift.com/maintainer-burnout-is-real"><span>Almost 60% of maintainers have quit or considered quitting maintaining one of their projects</span></a></li>
</ul>
[Take the survey now!](https://tidelift.az1.qualtrics.com/jfe/form/SV_8cfOxXluZDsXrrE?utm_source=devto&utm_medium=post&utm_campaign=survey2024)
<p><em>Your submissions to this survey are covered by the Tidelift privacy policy. If you are interested in the custom Tidelift lifter hoodie or the pay the maintainers t-shirt, make sure to share your contact information at the end of the survey. If you are not a maintainer partner, we can only ship t-shirts to North America, Europe, South America, and Australia (we're sorry!), but we would still love to hear from you.</em></p> | cdgrams |
1,915,987 | 🧩 100 FREE Frontend Challenges – Sharpen Your Skills! | Hey again 👋 This weeks newsletter is jam-packed with great reads and resources, here's a quick... | 0 | 2024-07-10T17:30:00 | https://dev.to/adam/100-free-frontend-challenges-sharpen-your-skills-67c | webdev, css, design, ux | **Hey again** 👋
This week's newsletter is jam-packed with great reads and resources; here's a quick look:
🛠️ Transitioning to Auto Height in CSS
🔧 Essential JS Snippets for Modern Web Features
👁️ Taku Satoh's Design Philosophy
Enjoy 🤗 - Adam at Unicorn Club.
---
## 📬 Want More? Subscribe to Our Newsletter!
Get the latest edition delivered straight to your inbox every week. By subscribing, you'll:
- **Receive the newsletter earlier** than everyone else.
- **Access exclusive content** not available to non-subscribers.
- Stay updated with the latest trends in design, coding, and innovation.
**Don't miss out!** Click the link below to subscribe and be part of our growing community of front-end developers and UX/UI designers.
🔗 [Subscribe Now - It's Free!](https://unicornclub.dev/ref=devto)
---
Sponsored by [Webflow](https://go.unicornclub.dev/webflow-no-code)
## [Take control of HTML5, CSS3, and JavaScript in a completely visual canvas](https://go.unicornclub.dev/webflow-no-code)
Let Webflow translate your design into clean, semantic code that’s ready to publish to the web, or hand off to developers.
[Get started — it's free](https://go.unicornclub.dev/webflow-no-code)
---
## 💻 Dev
[**45 CSS Breadcrumb Examples**](https://www.frontendplanet.com/css-breadcrumb-examples/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
Explore 45 CSS breadcrumb designs, featuring dark mode, pixel-perfect accuracy, and stylish hover effects.
[**Browser Support Tests in JavaScript for Modern Web Features**](https://frontendmasters.com/blog/browser-support-tests-in-javascript-for-modern-web-features/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
This is just a no-frills post with code snippets showing how to test support for some newish features in HTML, CSS, and JavaScript.
[**Transitioning to Auto Height**](https://css-tricks.com/transitioning-to-auto-height/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
The news: transitioning to auto is now a thing! Well, it’s going to be a thing.
[**100 FREE Frontend Challenges**](https://dev.to/bigsondev/100-free-frontend-challenges-3f0?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
In the spirit of building strong habits and the #100DaysOfCode idea, we decided to make our list of beautifully crafted "Design To Code" challenges publicly available.
---
### **💭 Fun Fact**
**Pixel Etymology** - The term "pixel", short for "picture element", was coined in 1969 to describe the individual components of a television image. This term has become fundamental in digital design, referring to the smallest element of an image on digital displays.
---
## 🔘 Design + UX
[**Just enough design (and no more)**](https://designlobster.substack.com/p/150-just-enough-design-and-no-more?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
We’ll be exploring the work and ideas of Japanese designer Taku Satoh—from chopsticks to children’s toys and imitation sushi for Issey Miyake.
[**Why toggle switches suck (and what to do instead)**](https://adamsilver.io/blog/why-toggle-switches-suck-and-what-to-do-instead/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
The idea of using toggle switches (over radio buttons and checkboxes).
[**Everything New in Figma**](https://buttondown.email/joeyabanks/archive/baseline-19-config-2024-everything-new-in-figma?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
Figma’s 2024 Config conference just wrapped up, and the team shared several new and exciting product announcements.
[**The Power of Grids in Design**](https://blog.shwetakaushal.com/the-power-of-grids-in-design?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
When we think about the most impactful designs, whether in print, web, or user interfaces, we often notice a balance and harmony that makes the design attractive and functional.
[**T-Shaped vs. V-Shaped Designers**](https://www.smashingmagazine.com/2024/06/t-shaped-vs-v-shaped-designers/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
Many job openings in UX assume very specific roles with very specific skills. Product designers should be skilled in Figma. Researchers should know how to conduct surveys. UX writers must be able to communicate brand values.
## 🗓️ Upcoming Events
Check out these events
### [🔘 Hatch Conference](https://www.hatchconference.com/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
_Remote • Berlin_
The event where experienced UX & Design Professionals in Europe meet to learn, get inspired and connect. 4-6 September
[See event →](https://www.hatchconference.com/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
### [💻 Intersection](https://www.intersection-conference.eu/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
_Turin, Italy_
Where Design meets Development. This year, we’re reimagining the way professionals engage with technology, under the theme “Revolutionizing User Interfaces: The Dawn of Intuitive Digital Worlds“. 3-4 October
[See event →](https://www.intersection-conference.eu/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
### [🟨 UtahJS Conference](https://utahjs.com/conference/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
_Sandy, UT_
The 2024 UtahJS Conference will be a 1-day conference on Friday, September 13
[See event →](https://utahjs.com/conference/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
### [🧠 UX Y'all Conference](https://www.uxyall.org/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
_North Carolina_
Our local conference encourages our community to share their knowledge and experience. Lightning talks, keynote talks, local speakers and workshops. September 19-20
[See event →](https://www.uxyall.org/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
## 🔥 Promoted Links
_Share with 2,500+ readers, book a [classified ad](https://unicornclub.dev/sponsorship#classified-placement)._
[**What Current & Future Engineering Leaders Read.**](https://go.unicornclub.dev/pointer)
Handpicked articles summarized into a 5‑minute read. Join 35,000 subscribers for one issue every Tuesday & Friday.
[**Be a leader. Outperform the competition 🚀**](https://go.unicornclub.dev/open-source-ceo)
Join 30,000+ weekly readers at Google, Canva, Stripe, TikTok, Sequoia and more. Come learn with us!
#### Support the newsletter
If you find Unicorn Club useful and want to support our work, here are a few ways to do that:
🚀 [Forward to a friend](https://preview.mailerlite.io/preview/146509/emails/125031340327307057)
📨 Recommend friends to [subscribe](https://unicornclub.dev/)
📢 [Sponsor](https://unicornclub.dev/sponsorship) or book a [classified ad](https://unicornclub.dev/sponsorship#classified-placement)
☕️ [Buy me a coffee](https://www.buymeacoffee.com/adammarsdenuk)
_Thanks for reading ❤️
[@AdamMarsdenUK](https://twitter.com/AdamMarsdenUK) from Unicorn Club_ | adam |
1,915,991 | Join Our Facebook Community for Exclusive Bad Bunny Merch Updates! | Want to be the first to know about our latest Bad Bunny merch? Like our Facebook page to get... | 0 | 2024-07-08T15:09:32 | https://dev.to/badbunnymerch12/join-our-facebook-community-for-exclusive-bad-bunny-merch-updates-28b1 | badbunnymerch, facebook, badbunny | Want to be the first to know about our latest Bad Bunny merch? Like our Facebook page to get exclusive updates, sneak peeks, and special offers. Connect with other Bad Bunny fans and share your love for the hottest merch around!
https://www.facebook.com/badbunnymerchshop/
 | badbunnymerch12 |
1,915,992 | dfasdfas | A post by ELIZABETH ROSE TIPPERY | 0 | 2024-07-08T15:10:31 | https://dev.to/maishusu/dfasdfas-1ajf | maishusu | ||
1,915,993 | how to deploy backend | After our bootcamp was done, we were instructed to create a portfolio website and given my interests... | 0 | 2024-07-08T15:50:19 | https://dev.to/ashleyd480/how-to-deploy-backend-4b05 | beginners, backend, heroku, webdev | After our bootcamp was done, we were instructed to create a portfolio [website](https://ashley-boost-portfolio.netlify.app/), and given my interests in both frontend (my creative side) and backend (my curiosity about the technical engine that drives a site), I decided to make my site fullstack. For example, I wanted to host the data tables for my bootcamp scores, feedback, and projects on the backend.
When researching the how-to, I noticed a lack of resources for deploying a fullstack site and specifically for how to deploy the backend.
I wanted to create a resource to share my learnings on how to deploy the backend with the hope that this can help others. To preface, yes- there can be multiple approaches, but this here is specifically the approach my mentor and I took to deploy the backend. Please also note- that yes Heroku does require a card (~ $5, $9/ month), so if you want to avoid the billing - you can also just hardcode your backend data on the frontend. :)
To start off, we used Heroku to deploy, and locally we used Postgres to store the data. Heroku is a very beginner-friendly platform compared to some of the more complex options like AWS. In the steps below, we will go over:
- [Account Set Up](#account-set-up)
- [Postgres Set Up](#postgres-set-up)
- [Migrating Data](#migrating-data)
- [Connecting to Frontend](#connecting-to-frontend)
Before reading these steps, please note that the terminal commands are specific to Mac. As deployment strategies can vary based on your versions, below are the specific versions of systems I had:
- React: 18.2.0
- Spring: 3.3.0
- Postgres: 14.12
Another optional thing to note: if it makes it easier for you to organize the work, you can have a separate GitHub repo for each of the frontend and backend. :)
<hr>
## Account Set Up
### 1. To start, you can go to heroku.com to sign up.
### 2. Click `Create New App` or navigate to:
https://dashboard.heroku.com/apps

### 3. Give your app a name.

### 4. From your Dashboard (https://dashboard.heroku.com/apps), click on the app name.

### 5. Add a system.properties file to your backend Java project.
Note: please keep the page from step 4 open. This step 5 is done in your IDE to ensure proper deployment.
You will add this `system.properties` file to the root of your backend Java project directory. In this file, add the following:
```
java.runtime.version=21
```

Make sure you replace the `21` with your actual Java version number.
This file helps Heroku understand which version of Java to use when running your application. Without it, Heroku might default to a version that is not compatible with your application, leading to potential runtime issues.
### 6. Click on Deploy and click `Connect to GitHub`.
After connecting your GitHub repository, you'll see options to set up automatic deployments. Choose the GitHub branch you want Heroku to deploy automatically from (usually main or master). Optionally, enable automatic deploys so that every time you push to the chosen branch on GitHub, Heroku will automatically deploy those changes.

If you are having issues with automatic deployment, you may manually deploy. You will see that option at the bottom of the Deploy page.

After successful deployment, Heroku assigns a URL to your application. You can find this URL either in the output of the deployment process or by logging into your Heroku account and navigating to your app's dashboard. Make sure you write that URL down in a secure spot, as this will be the updated endpoint for your API calls.
Once you have obtained the URL (i.e. `https://your-app-name.herokuapp.com`), you can use it to make HTTP requests to your backend API endpoints deployed on Heroku. Replace `your-app-name` with the actual name of your Heroku app.
For example, if your backend API endpoint is `/api/data`, your complete URL for making requests would be `https://your-app-name.herokuapp.com/api/data`.
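One way to avoid scattering that URL across your frontend code is to keep it in a single constant and build endpoint URLs from it. Here is a minimal sketch; the app name and path are placeholders from the examples above, not real endpoints:

```javascript
// Keep the base URL in one place; after deploying, this is the only line
// that changes from a local dev URL (e.g. http://localhost:8080) to Heroku.
const BASE_URL = "https://your-app-name.herokuapp.com"; // placeholder app name

// Join the base with an endpoint path, tolerating stray slashes on either side
function apiUrl(path) {
  return `${BASE_URL.replace(/\/+$/, "")}/${path.replace(/^\/+/, "")}`;
}

console.log(apiUrl("/api/data")); // https://your-app-name.herokuapp.com/api/data
```

Your fetch calls can then use `fetch(apiUrl("/api/data"))`, so a later change of host touches one constant instead of every call site.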
### 7. Check the logs to verify the correct Java version is being used.
Logs for both successful and unsuccessful builds are available from your app’s Activity tab in the Heroku Dashboard. You may read more steps for this under Heroku Logs documentation [here](https://devcenter.heroku.com/articles/logging).
After deploying, you should also double check the Heroku deployment logs to ensure the correct Java version is being used:
- Go to the Heroku dashboard, select your app, and navigate to the "More" dropdown in the top-right corner. Choose "View Logs."
- In the logs, look for lines that mention the Java runtime version. This will help you confirm that the correct version is being used.
- If the version is incorrect, update the `system.properties` file with the correct version number and redeploy.
<hr>
## Postgres Set Up
Now let's say you want to connect your existing Postgres database. For example, say you are using Spring Boot, and on the frontend you make API calls to the endpoints to `get` your data for display.
### 1. Click on the dots at top right next to your profile picture, and select `Data`.

### 2. Click `Create one` under `Heroku Postgres` and click on `install`.

### 3. Select the plan so you can host your database on Heroku.
The ones recommended for small-scale projects like a portfolio are `Essential 0` ($5/month) or `Essential 1` ($9/month). I got the latter.

<hr>
## Migrating Data
Now that the Postgres instance is set up, we can migrate our data from local Postgres to the Heroku remote. An analogy is pushing code from your local machine to a remote GitHub repo, except in this case we are "pushing" our data.
### 1. First, you will want to install Docker.
Docker allows you to package an application along with its dependencies and configuration settings into a single, portable unit called a container. This makes it easier to develop and deploy applications. For example, when you use Docker to run PostgreSQL commands like `pg_dump` (to back up a database) or `pg_restore` (to restore a database), these commands execute within this isolated environment.
In our case, using Docker resolved a version conflict: the Heroku server version of Postgres was 16.2 (you can see the supported versions [here](https://devcenter.heroku.com/articles/heroku-postgres-version-support#:~:text=Heroku%20deprecates%20these%20versions%20to,version%2016%20as%20the%20default.)), but our local `pg_dump` version was 14.12.
You may type the command below in your terminal (the `--cask` flag installs the Docker Desktop app rather than just the CLI):
```
brew install --cask docker
```
… or if your terminal is being cranky and doesn’t want to work, then you can go to the Docker [website](https://www.docker.com/get-started/) and download Docker Desktop for Mac. Once installed, open the “Docker Desktop” application, skip through the various prompts until the docker daemon is running. Then you can close the docker desktop window and proceed to run docker commands in your shell session.
### 2. Then, use `pg_dump` to create a backup file (*.dump) of your database for migration. You can run this command within Docker.
Type the following in your terminal:

```
docker run --rm -e PGPASSWORD=yourLocalPostgresPassword -v "$PWD":/backup postgres:16 pg_dump -Fc --no-acl --no-owner -h host.docker.internal -U postgres -d ashley-portfolio-database -f /backup/ashley-portfolio.dump
```

You will see some abbreviated flags like `-h` and `-U`. Here's what they stand for; make sure to replace the value following each flag with your own:
- **-h** is the host name. Because `pg_dump` runs inside a Docker container, `host.docker.internal` is how the container reaches Postgres running on `localhost` on your Mac; if your database lives elsewhere, use that host instead.
- **-U** is your local postgres username.
- **-d** is the name of your local postgres database.
- **-f** /backup/ashley-portfolio.dump: Specifies the filename to save the dump to. The `-v "$PWD":/backup` option mounts your current directory into the container at `/backup`, so the dump file lands in whatever directory your terminal session is currently in; without that mount, the file would be written inside the ephemeral container and lost when it exits.

Notice above we put `postgres:16` so that the `pg_dump` version matches the Heroku server version. The `-e PGPASSWORD=...` flag supplies your local Postgres password, since the container cannot prompt you for it interactively.
Some more details if you are curious on the how and why behind this: `pg_dump` allows you to create a snapshot (backup) of your local PostgreSQL database (ashley-portfolio-database). This backup is crucial for preserving your data before transferring it to Heroku.
The backup file created by `pg_dump` (ashley-portfolio.dump) contains a consistent snapshot of your database, including the SQL commands necessary to remake the table structure. This file can then be transferred and restored into a different PostgreSQL database environment, such as one hosted on Heroku.
### 3. Upload your database `pg_dump` backup file to Heroku:
Now that you have created a backup of your local PostgreSQL database using `pg_dump`, the next step is to migrate this data to your PostgreSQL database hosted on Heroku.
Use `pg_restore` to upload your local database backup (ashley-portfolio.dump) to your Heroku PostgreSQL database:
```
docker run -e PGPASSWORD=yourHerokuDatabasePassword -v /path/to/your/backup:/my-portfolio-backup postgres:16 pg_restore -h herokuHostName -p 5432 -U yourHerokuUsername -d yourHerokuDatabaseName /my-portfolio-backup
```
Replace `/path/to/your/backup`, `yourHerokuDatabasePassword`, `herokuHostName`, `yourHerokuUsername`, and `yourHerokuDatabaseName` with your actual paths and database details. You can keep the port `5432` (the Postgres default) and the `postgres:16` image unless your setup differs.
You can find this information by clicking the 9 dots at the top right of your Heroku page next to your profile picture. Click on `Data`. From there, it will open a page and you will see `Datastores`. Click on your datastore. Then you will see another page open, and at the top bar, it will say `Overview`, `Durability`, `Settings`. Make sure you select `Settings`. From there, click on `View Database Credentials`. Make sure that you don’t write this information down in an insecure environment.
Here is an explanation of what that Docker command means:
- **docker run:** This command starts a new Docker container.
- **-e PGPASSWORD=yourHerokuDatabasePassword:** This sets an environment variable inside the container. Here, PGPASSWORD is being set to your Heroku database password. Environment variables are a way to pass configuration settings to your application.
- **-v /path/to/your/backup:/my-portfolio-backup:** This option mounts a path on your host machine (your computer) into the container. The part before the colon (/path/to/your/backup) should be the path to the backup file (ashley-portfolio.dump) itself on your host, since the command passes the container-side path directly to `pg_restore`. The part after the colon (/my-portfolio-backup) is the path inside the container where the backup file will be accessible.
- **postgres:16:** This specifies the Docker image to use, in this case, version 16 of the PostgreSQL image. This ensures compatibility and forces the container to run with PostgreSQL version 16.
- **pg_restore:** This is the command that will run inside the container. It restores a PostgreSQL database from a backup file.
- **-h herokuHostName:** This specifies the host name of your Heroku PostgreSQL database.
- **-p 5432:**This specifies the port number to connect to. The default port for PostgreSQL is 5432.
- **-U yourHerokuUsername:** This specifies the username to connect to the PostgreSQL database on Heroku.
- **-d yourHerokuDatabaseName:** This specifies the name of the PostgreSQL database to restore the backup into.
- **/my-portfolio-backup:** This is the path to the backup file inside the container. It should match the container-side path of the `-v` mount above.
### 4. The last step is to make the migrated database appear in your local Postgres client (this way, if you need to view the data or make updates, you can do so).
Using the aforementioned steps from step 3 on how to view your Heroku database credentials, make sure you keep that page open.
Open Postgres from your desktop. Right-click on `Servers` in Postgres, then click `Register` and select `Server`.
From there, in the popup, select `Connections` and type in the info from your Heroku database credentials.

Now, when you open Postgres, you should see a server called heroku. Expand it and you will see `Databases`. Search for the one that matches the `database` name in your Heroku database credentials. Expand that database, then expand `Schemas`, then `public`, then `Tables`. Now you can go to each table and run queries.
<hr>
## Connecting to Frontend
1. To set up the frontend, you can deploy your frontend GitHub repo on Netlify by following the steps in the Netlify documentation.
2. You will want to make sure that for any API calls your frontend code makes, the API URL is updated from localhost to the new URL (e.g. `https://your-app-name.herokuapp.com`); you may refer to Account Set Up step 6 for a refresher on that. :)
3. After you finish deploying on Netlify, you will get a link to your website, which you can visit to see your fullstack site.
| ashleyd480 |
1,915,994 | Node Selectors, Labels, Selectors, Static Pods, and Manual Scheduling | Hello everyone, welcome back to the CK 2024 series!! Today we will cover node selectors, labels and... | 0 | 2024-07-08T15:13:55 | https://dev.to/jensen1806/node-selectors-labels-selectors-static-pods-and-manual-scheduling-47bn | kubernetes, docker, devops, containers |
Hello everyone, welcome back to the CK 2024 series!! Today we will cover node selectors, labels and selectors, static pods, and manual scheduling. I highly recommend you read through the previous blogs to grasp the concepts thoroughly before proceeding with this one.
### Kubernetes Architecture Recap
To begin, let's revisit a familiar diagram from previous videos - the Kubernetes sample architecture, the control plane components, and the worker nodes running various workloads. One key component in the control plane is the scheduler, which decides which pod goes on which node.
When you create a new pod using the kubectl command, the request goes to the API server. The API server creates an entry in the etcd database, and the scheduler then decides which node the new pod should be scheduled on. It sends the request to the kubelet on the chosen node, which then runs the pod.
### Static Pods
A critical aspect of Kubernetes is how control plane components are managed. These components, such as the scheduler, API server, and controller manager, are often run as static pods. Static pods are not managed by the scheduler but by the kubelet directly. This ensures that essential control plane components are always running, even if the scheduler itself is down.
Static pods are defined in YAML files located in a specific directory on the control plane node, typically /etc/kubernetes/manifests. The kubelet monitors this directory and ensures that all the pods defined there are running.
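As a sketch, a static pod manifest is just a regular pod definition dropped into that directory. This hypothetical nginx example illustrates the shape:

```yaml
# /etc/kubernetes/manifests/nginx-static.yaml (hypothetical example)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-static
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
```

The kubelet starts this pod directly and registers a read-only mirror pod on the API server, with the node name appended to the pod name.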
### Hands-On Demo: Static Pods
Let's look at a hands-on demo to understand static pods better:
1. **Accessing the Control Plane Node**:
If you're using a kind cluster, which runs Kubernetes in Docker containers, you can access the control plane node with the docker exec command.
```
docker exec -it <control-plane-container-name> bash
```
2. **Verifying Static Pods**:
Once inside the control plane node, navigate to the /etc/kubernetes/manifests directory.
```
cd /etc/kubernetes/manifests
```
3. **Managing Static Pods**:
You can see YAML files for various control plane components. For example, to stop the scheduler, you can move its YAML file out of this directory.
```
mv kube-scheduler.yaml /tmp/
```
This action will stop the scheduler pod. You can verify this by checking the pods in the kube-system namespace.
```
kubectl get pods -n kube-system
```
4. **Restarting Static Pods:**
To restart the scheduler, move the YAML file back to the manifests directory.
```
mv /tmp/kube-scheduler.yaml /etc/kubernetes/manifests/
```
### Manual Scheduling
Let's walk through manual scheduling with a demo:
1. **Create a Pod YAML:**
Generate a YAML file for a new pod.
```
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
```
2. **Edit the YAML:**
Open the YAML file and add the nodeName field under the spec section to specify the node.
```
spec:
containers:
- image: nginx
name: nginx
nodeName: worker-node1
```
3. **Apply the YAML:**
Deploy the pod using the edited YAML file.
```
kubectl apply -f pod.yaml
```
4. **Verify Pod Scheduling:**
Check that the pod is running on the specified node.
```
kubectl get pods -o wide
```
### Labels and Selectors
Labels and selectors are key in Kubernetes for organizing and managing resources. Labels are key-value pairs attached to objects, and selectors allow you to filter resources based on these labels.
1. **Add Labels:**
Add labels to your pod.
```
metadata:
labels:
tier: frontend
app: nginx
```
2. **Apply Changes:**
Update the pod with new labels.
```
kubectl apply -f pod.yaml
```
3. **Use Selectors:**
Filter pods using labels.
```
kubectl get pods -l tier=frontend
```
### Conclusion
In our latest discussion, we delved into several essential concepts for managing and optimizing Kubernetes deployments: node selectors, labels and selectors, static pods, and manual scheduling. Understanding these elements is crucial for effectively handling your Kubernetes environment.
Thank you for reading, and stay tuned for our next blog post!
For further reference, check out the detailed YouTube video here:
{% embed https://www.youtube.com/watch?v=6eGf7_VSbrQ&list=WL&index=15&t=7s %} | jensen1806 |
1,915,995 | Lado Okhotnikov on mocap technology for Meta Force | The recently published article on the mo-cap technology reveals an interesting case of its... | 0 | 2024-07-08T15:15:55 | https://dev.to/ncs_music_31bd885261814f7/lado-okhotnikov-on-mocap-technology-for-meta-force-5ai6 | The recently published [article on mo-cap technology](https://beincrypto.com/lado-okhotnikov-about-an-integral-part-of-the-project/) reveals an interesting case of its introduction to metaverses. Lado Okhotnikov, the visionary behind Meta Force, asserts that innovative technologies are necessary to create something intriguing in digital realms. In particular, his platform has taken advantage of mocap technology, which has become indispensable in expensive projects worth millions of dollars and is the preferred approach when a large budget is involved. The article shows how it works.
Why integrate Motion Capture
This innovative technology is used, for instance, in blockbuster games such as Halo 4, LA Noire, and Beyond: Two Souls and lots of other hits. The technology enables actors to infuse their digital counterparts with lifelike movements, transcending traditional voice acting to embody characters authentically on-screen.
A new standard in Metaverse experiences
Meta Force also utilizes the technology in pursuit of unparalleled realism. According to the Metaworld concept, its inhabitants should seamlessly mirror reality. Achieving this is important for complete immersion of users, without any artificial undertones.
Lado Okhotnikov envisions setting a new standard in Metaverse experiences. He was inspired by notable games and decided to borrow some techniques from titles like [GTA V](https://www.rockstargames.com/gta-v). What sets them apart is their character animations, which are derived from real-life performances rather than computer simulations.
Mo-cap in Meta Force: how it happens
Lado says that understanding the mo-cap mechanics was just the beginning. The importance of spontaneity and improvisation became evident as the team explored virtual environments. This involved rigorous experimentation, and Lado even participated in it personally. He wanted to ensure that his digital avatar reflected true-to-life movements.
Achieving realism in tasks like this is a continuous process. In Meta Force, a dedicated team of professionals, including mo-cap designers, works on it. The members of the team refine every detail. Their expertise enhances animations, ensuring that the final product is visually captivating and devoid of the artificiality found in conventional animation.
The role of the mocap designer is instrumental. The attempt to breathe life into visuals adds new dynamism to the objects. There are numerous challenges along the way as users delve deeper into the animation.
Meta Force by Lado Okhotnikov represents more than just a virtual platform since it embodies a paradigm shift in the development of virtual worlds. The team’s vision is to blur the frontiers between the real and virtual worlds. The platform is introducing a new and original way of evolving virtual environments on a worldwide scale. MetaForce is breaking new ground that offers novel ideas and methods within the realm of virtual environments.
Promising project by Lado Okhotnikov
The project can acquire tremendous popularity. The platform is looking forward to a future scenario where many tools are available for users to interact within the Metaverse. The members of the community will be able to explore and operate within a vast and unrestricted space.
Lado Okhotnikov always emphasizes the concept of decentralization, regardless of the activity. It forms an absolutely different approach within the community and helps to revolutionize the platform and develop without the control of a single entity. It empowers users to navigate a limitless, decentralized realm and employ almost unlimited possibilities for growth and development.
About company
Meta Force is a company developing a unique Metaverse based on the Polygon blockchain. The Metaverse is optimized for business applications.
Lado Okhotnikov is the CEO of Meta Force and an expert in the IT and crypto industries.
Based on Dan Michael materials
The head of Meta Force Press Center
press@meta-force.space
#ladookhotnikov
| ncs_music_31bd885261814f7 | |
1,915,996 | Follow Us on X for Real-Time Bad Bunny Merch News! | Stay up-to-date with all things Bad Bunny merch by following us on X! Get real-time updates on new... | 0 | 2024-07-08T15:17:04 | https://dev.to/badbunnymerch12/follow-us-on-x-for-real-time-bad-bunny-merch-news-1d2k | badbunnymerch, twitter, badbunny | Stay up-to-date with all things Bad Bunny merch by following us on X! Get real-time updates on new arrivals, flash sales, and much more. Join the conversation and tweet us your favorite Bad Bunny merch moments!
https://x.com/BadBunny12usa
 | badbunnymerch12 |
1,915,997 | Translatable Columns Using Laravel | Brief In multi-language projects, it’s essential to store phrases in different languages for text... | 0 | 2024-07-08T15:21:58 | https://dev.to/__b9cd1fa82fb7434/translatable-columns-using-laravel-21d6 | laravel, architecture, designpatterns, performance | **Brief**
In multi-language projects, it’s essential to store phrases in different languages for text columns. For example, a multi-language articles site needs to store the title and body phrases for all supported languages. This can become a challenge if not handled correctly. In this article, we will explore different architectures with examples, comparing query efficiency and modifiability between them.
**Architectures**
**1- Separate Columns for Each Language**
The simplest architecture involves having a column for each language, such as title_ar and title_en. Using Laravel, working with this setup can be straightforward as shown in the example below.
**Pros:**
Phrases are treated as regular text columns, allowing for efficient search and sort operations using SQL.
Easy to implement.
**Cons:**
Limited modifiability; adding a new language requires running a migration to add new columns.
Having multiple translatable columns for multiple languages can rapidly increase the column count, potentially affecting query efficiency.
**Example:**
```
// Migration for articles table
Schema::create('articles', function (Blueprint $table) {
    $table->id();
    $table->string('title_en');
    $table->string('title_ar');
    $table->text('body_en');
    $table->text('body_ar');
    $table->timestamps();
});

// Article model class
use Illuminate\Database\Eloquent\Casts\Attribute;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Support\Facades\App;

class Article extends Model
{
    // List of fillable attributes
    protected $fillable = ['title_en', 'title_ar', 'body_en', 'body_ar'];

    protected function title(): Attribute
    {
        return Attribute::make(
            get: fn () => $this->getTranslatableAttribute('title'),
            set: fn ($value) => $this->setTranslatableAttribute('title', $value)
        );
    }

    protected function body(): Attribute
    {
        return Attribute::make(
            get: fn () => $this->getTranslatableAttribute('body'),
            set: fn ($value) => $this->setTranslatableAttribute('body', $value)
        );
    }

    private function getTranslatableAttribute($attribute)
    {
        $locale = App::getLocale();

        return $this->{"{$attribute}_{$locale}"};
    }

    // The set closure must return the attributes to write, keyed by column.
    private function setTranslatableAttribute($attribute, $value)
    {
        $locale = App::getLocale();

        return ["{$attribute}_{$locale}" => $value];
    }
}
```
**2- JSON Column (Spatie Package)**
Using JSON columns can avoid creating many separate columns but adds complexity to sorting, searching, and querying. This architecture is suitable for projects that do not require complex queries on the phrases. You can easily apply this architecture in Laravel by installing the laravel-translatable package from Spatie.
**Pros:**
Easy to implement using the Spatie package.
Avoids a large number of columns.
**Cons:**
Increased complexity in handling JSON.
Sorting, searching, and querying can become complex and inefficient.
**Example:**
```
// Migration for articles table with JSON column
Schema::create('articles', function (Blueprint $table) {
    $table->id();
    $table->json('title');
    $table->json('body');
    $table->timestamps();
});

// Sample usage with the Spatie package
use Illuminate\Database\Eloquent\Model;
use Spatie\Translatable\HasTranslations;

class Article extends Model
{
    use HasTranslations;

    public $translatable = ['title', 'body'];
}
```
**3- Separate Phrases Table**
Using a separate table to store phrases is the most suitable option for projects that heavily rely on multiple languages. Regardless of the number of languages or rows, it involves a simple join operation. However, implementing this architecture can be complex.
**Pros:**
Efficient queries.
Phrases are treated as regular text columns, allowing for efficient search and sort operations using SQL.
Adding or deleting a language does not require significant changes.
**Cons:**
Complexity in implementation.
Requires joining with another table to retrieve the phrases.
**Example:**
```
// Migration for articles table
Schema::create('articles', function (Blueprint $table) {
$table->id();
$table->timestamps();
});
// Migration for phrases table
Schema::create('phrases', function (Blueprint $table) {
$table->id();
$table->foreignId('article_id')->constrained()->onDelete('cascade');
$table->string('language');
$table->string('key');
$table->text('value')->nullable();
$table->timestamps();
});
// Article model class
use Illuminate\Database\Eloquent\Casts\Attribute;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Support\Facades\App;

class Article extends Model
{
public function phrases()
{
return $this->hasMany(Phrase::class);
}
protected function title(): Attribute
{
$locale = App::getLocale();
return Attribute::make(
get: fn() => $this->phrases
->where('key', 'title')
->where('language', $locale)
->first()
?->value ?? '',
);
}
protected function body(): Attribute
{
$locale = App::getLocale();
return Attribute::make(
get: fn() => $this->phrases
->where('key', 'body')
->where('language', $locale)
->first()
?->value ?? '',
);
}
}
// Phrase model class
use Illuminate\Database\Eloquent\Model;

class Phrase extends Model
{
public function article()
{
return $this->belongsTo(Article::class);
}
}
```
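For intuition, the `title` accessor above is roughly equivalent to the following SQL (a sketch using the table and column names from the migrations, with `en` as the current locale and `1` as an example article id):

```sql
SELECT phrases.value
FROM phrases
WHERE phrases.article_id = 1
  AND phrases.key = 'title'
  AND phrases.language = 'en'
LIMIT 1;
```

Note that the accessor as written loads the related phrases and filters the collection in PHP, so eager-loading `phrases` avoids an N+1 query when listing many articles.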
In conclusion, the choice of architecture depends on the specific needs of your project. For simple projects with a limited number of languages, separate columns might suffice. For projects requiring flexibility and scalability, using a separate phrases table is more efficient, despite the complexity of implementation. The JSON column approach can be a middle ground but requires careful handling of query operations.
If you decide to use a separate phrases table for the sake of flexibility and scalability, consider trying the Translatable Pro package from Larazeus. This package is designed for performance, storing phrases in separate database tables to simplify maintaining translations across all languages. With just one Composer install, you can seamlessly integrate comprehensive multi-language support into your app, enabling advanced, optimized, high-performance translatable applications with an efficient database structure, and it also supports Filament.
For more information, visit
https://larazeus.com/translatable-pro | __b9cd1fa82fb7434 |
1,915,999 | Discover Our Bad Bunny Merch Collection on Pinterest! | Pin your favorite Bad Bunny merch by following us on Pinterest! Explore our boards for the latest... | 0 | 2024-07-08T15:18:48 | https://dev.to/badbunnymerch12/discover-our-bad-bunny-merch-collection-on-pinterest-2i6a | badbunnymerch, pinterest, badbunny | Pin your favorite Bad Bunny merch by following us on Pinterest! Explore our boards for the latest trends and styling tips featuring our exclusive collection. Create your own Bad Bunny-inspired mood boards today!
https://www.pinterest.com/badbunny12usa/

| badbunnymerch12 |
1,916,010 | How to Use Line Height in Tailwind CSS | To set line height in Tailwind CSS, you can use the leading utility classes. The leading classes... | 0 | 2024-07-08T15:20:13 | https://larainfo.com/blogs/how-to-use-line-height-in-tailwind-css/ | html, tailwindcss, webdev | To set line height in Tailwind CSS, you can use the leading utility classes. The leading classes control the line height of an element.
```html
<!-- Default line height -->
<p class="leading-normal">This is a paragraph with default line height.</p>
<!-- Loose line height -->
<p class="leading-loose">This is a paragraph with loose line height.</p>
<!-- Tight line height -->
<p class="leading-tight">This is a paragraph with tight line height.</p>
```

You can also use the numeric `leading-{size}` utilities to set the line height explicitly. These set a fixed line height in `rem` units rather than a multiple of the font size: for example, `leading-3` sets `line-height: 0.75rem` and `leading-5` sets `line-height: 1.25rem`.
```html
<!-- line-height: 1.25rem -->
<p class="leading-5">This is a paragraph with a line height of 1.25rem.</p>

<!-- line-height: 1.75rem -->
<p class="leading-7">This is a paragraph with a line height of 1.75rem.</p>
```
Here's a list of the available leading utility classes in Tailwind CSS:
- leading-none
- leading-tight
- leading-snug
- leading-normal
- leading-relaxed
- leading-loose
- `leading-{size}` (e.g., `leading-3`, `leading-4`, `leading-5`, etc.)
You can also set a custom line height with an arbitrary value in rem units:
```html
<div class="flex flex-col items-center justify-center h-screen space-y-4">
<p class="max-w-xl leading-[3rem]">So I started to walk into the water. I won't lie to you boys, I was
terrified. But I pressed on, and as I made my way past the breakers a strange calm came over me. I don't
know if it was divine intervention or the kinship of all living things but I tell you Jerry at that moment,
I was a marine biologist.</p>
</div>
```
 | saim_ansari |
1,916,013 | My experience re-certifying in AWS Certified DevOps Engineer - Professional Exam and learning something new | Introduction In the fast-paced world of cloud computing and DevOps, staying abreast of the... | 0 | 2024-07-08T15:20:27 | https://dev.to/aws-builders/my-experience-re-certifying-in-aws-certified-devops-engineer-professional-exam-and-learning-something-new-2m3o | aws, devops, certification, learning | ## Introduction
In the fast-paced world of cloud computing and DevOps, staying abreast of the latest certifications is paramount. Recently, I undertook the challenge of recertifying for the AWS Certified DevOps Engineer - Professional exam. This certification is tailored for seasoned professionals with extensive experience in managing AWS environments, affirming proficiency in deploying and operating distributed applications on the AWS platform.
Articles on how to pass the AWS Certified DevOps Engineer - Professional exam are plentiful, but I always make a point of reviewing the latest ones for any new insights or updates. This year, I'll share my experience recertifying, including new things I didn’t recall from the certification and the aspects I found most interesting this time around.
## Overview of the Certification
The AWS Certified DevOps Engineer - Professional exam focuses on various aspects of DevOps engineering, including continuous delivery (CD) methodologies, automation of security controls, governance processes, and monitoring and logging practices. It is recommended to have prior certifications such as the AWS Certified Developer – Associate and AWS Certified SysOps Administrator – Associate, particularly the SysOps certification as it covers a significant part of the content at a different level.
### Official Resources:
* The exam content outline and passing score, is in the [Exam Guide](https://d1.awsstatic.com/training-and-certification/docs-devops-pro/AWS-Certified-DevOps-Engineer-Professional_Exam-Guide.pdf)
* AWS Skill Builder Resources
* The [Sample Questions](https://explore.skillbuilder.aws/learn/course/external/view/elearning/14673/aws-certified-devops-engineer-professional-official-practice-question-set-dop-c02-english) are 20 questions developed by AWS to demonstrate the style of its certification exams.
* AWS offers various resources on their Skill Builder platform to help you prepare for the exam. There is a free course called [Exam Prep Standard Course](https://explore.skillbuilder.aws/learn/course/external/view/elearning/16352/exam-prep-standard-course-aws-certified-devops-engineer-professional-dop-c02-english) and for those with a subscription, there are additional exam questions and an enhanced version of the preparation course.
### Exam Content Domains
The exam covers six content domains, each with a specific weighting. Below is a breakdown of each domain along with key topics and important points to review:
> Domain 1: SDLC Automation (22%)
> Domain 2: Configuration Management and IaC (17%)
> Domain 3: Resilient Cloud Solutions (15%)
> Domain 4: Monitoring and Logging (15%)
> Domain 5: Incident and Event Response (14%)
> Domain 6: Security and Compliance (17%)
### Detailed Exploration of my Key Learnings
The domain weightings are important for understanding the share of questions per topic, but in this case the difference between the maximum and minimum is only 8%, so all domains carry roughly the same weight. Normally, I tend to review the services and how to integrate them with each other rather than focusing on the domains. Here are some of the notes I took for review or learning, but this will depend a lot on your experience and background in AWS.
Apart from taking notes, it is very useful to study diagrams of integrations or solutions and to practice with real scenarios (hands-on experience is always best). I try to complement this with diagrams from the AWS documentation or create my own.
* **AWS Developer Tools:** Extensive exploration of AWS CodePipeline, AWS CodeBuild, and AWS CodeCommit.
* AWS CodeArtifact: Understanding How It Works, Integration with External Repositories, and Configuration in a Multi-Account Organization.
* AWS CodeDeploy: Understand the hooks and their appropriate use cases (BeforeInstall, AfterInstall,…). Familiarize yourself with the different deployment types and their impacts. Understand the different deployment strategies.

* **Serverless architectures:** Ways of deployments, when and how to use canary releases. Differences between provisioned concurrency and reserved concurrency with AWS Lambdas. Use AWS Serverless Application Model (AWS SAM)


* Ensure managed EC2 instances have the correct application version and patches installed using **SSM** (Patch Manager, Maintenance Windows, State Manager, Inventory).
* Use **CloudFormation** drift detection to manage configuration changes. Know how to use different stacks together, the differences between StackSets and nested stacks, how to deploy instances and update them using their user data, and the hooks of EC2, ASG, and ALB and when to use each.


* Use **Auto Scaling with warm pools** for better instance state management.
* Use of Amazon **EventBridge rules for detecting events**, for example with AWS Health Service.
* Know well how **AWS Organizations** works, how it is integrated with other services, how you can delegate the administration of these services to other accounts, how they are defined and what SCPs are used for, and the differences with permission boundaries.


* Set up automatic **remediation** actions using **AWS Config** and AWS Systems Manager Automation runbooks.
* Track **service limits** with Trusted Advisor and set up CloudWatch Alarms for notifications.
Additional services: learn what each service is used for and how it differs from the rest.
* Amazon **Inspector**: Continuously scan workloads for vulnerabilities.
* Amazon **GuardDuty**: Detect threats and unauthorized activities.
* AWS **Trusted Advisor**: Make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps.
* Amazon **Macie**: Automatically discover, classify, and protect sensitive data.

* AWS **Compute Optimizer**: Identify optimal AWS resource configurations.
* AWS **EC2 Image Builder**: simplifies the building, testing, and deployment of Virtual Machine and container images for use on AWS or on-premises.

* AWS **Elastic Disaster Recovery**: Minimize downtime and data loss with fast recovery.
* AWS **Resilience Hub**: Define, validate, and track application resilience on AWS.

Although certifications alone do not validate your knowledge, I always **learn something new** about a service or feature I have not used before. For example, using Warm Pools in Amazon EC2 Auto Scaling to decrease latency for applications with long boot times (not a new feature; it dates from 2021), or using an AWS CodeArtifact domain to manage multiple repositories across multiple accounts.

**In conclusion**, the AWS Certified DevOps Engineer - Professional exam not only reinforced my existing skills but also broadened my understanding of AWS services and their real-world applications. Continuous learning is indispensable in navigating the ever-evolving landscape of cloud technology.
And you, what is the latest thing you have learned?
Keep learning! | ysyzygy |
1,916,014 | Simple TikTok 🎬 video using Python | Whether I'm building a little script, a web app, or doing machine learning, Python is always a handy... | 0 | 2024-07-08T15:22:14 | https://dev.to/kwnaidoo/simple-tiktok-video-using-python-i4 | python, programming, tutorial, automation | Whether I'm building a little script, a web app, or doing machine learning, Python is always a handy little language to have in my toolbox.
As is the "Pythonic" way, there's a package for everything and anything in Python, and video is no different.
In this quick guide, I'll show you how to make a simple TikTok short video with just a few lines of code.
## Getting started
To get started we'll need 3 things:
- Install a PIP library: "pip install moviepy"
- Some background music, for this demo I am using: [A soothing Piano soundtrack by Nicholas Panek](https://pixabay.com/music//?utm_source=link-attribution&utm_medium=referral&utm_campaign=music&utm_content=21876)
- An [animated GIF](https://www.canva.com/design/DAGKXkHyyVU/kCp91X6GodtI8-rR8mhhlQ/edit?utm_content=DAGKXkHyyVU&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton) I whipped up in Canva.
## Building our "movie maker" script
First, we need to load our ".gif" file:
```python
from moviepy.editor import VideoFileClip, AudioFileClip
video_clip = VideoFileClip("./background.gif")
```
Next, let's load our audio file:
```python
bg_music = AudioFileClip("./piano.mp3")
```
Finally, we add our sound to the GIF:
```python
video_clip = video_clip.set_audio(bg_music)
video_clip.write_videofile("./video.mp4", codec='libx264', fps=24)
video_clip.close()
bg_music.close()
```
And Voila! You should now have a video file ("video.mp4") in the current directory with our final TikTok short.
Here's the full script:
```python
from moviepy.editor import VideoFileClip, AudioFileClip
video_clip = VideoFileClip("./background.gif")
bg_music = AudioFileClip("./piano.mp3")
video_clip = video_clip.set_audio(bg_music)
video_clip.write_videofile("./video.mp4", codec='libx264', fps=24)
video_clip.close()
bg_music.close()
```
You can learn more about "moviepy" here: [https://pypi.org/project/moviepy/](https://pypi.org/project/moviepy/)
Tip: To shorten the audio so that it fits the video length:
```python
# audio_loop is not a clip method; subclip trims the track to the video length
bg_music = AudioFileClip("./piano.mp3") \
    .subclip(0, video_clip.duration)
```
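TikTok uses a 9:16 portrait frame, so a landscape GIF usually needs a centered crop before export. This stdlib-only helper (a sketch; the function name is my own) computes the crop box, which you could then feed to moviepy's `crop` effect:

```python
def portrait_crop_box(width, height, aspect=9 / 16):
    """Return (x1, y1, x2, y2) for a centered crop with the given
    width:height aspect ratio (9:16 portrait by default)."""
    target_width = height * aspect
    if target_width <= width:
        # Clip is wide enough: keep the full height, trim the sides.
        x1 = (width - target_width) / 2
        return (x1, 0, x1 + target_width, height)
    # Clip is too narrow: keep the full width, trim top and bottom.
    target_height = width / aspect
    y1 = (height - target_height) / 2
    return (0, y1, width, y1 + target_height)

# Example: a 1920x1080 landscape clip cropped to portrait
print(portrait_crop_box(1920, 1080))  # (656.25, 0, 1263.75, 1080)
```

For a 1920x1080 clip you could then apply `crop(video_clip, x1=656.25, y1=0, x2=1263.75, y2=1080)` from `moviepy.video.fx.all`.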
| kwnaidoo |
1,916,015 | Dive into the World of Data Science: Free Online Courses to Expand Your Expertise | The article is about a curated collection of 10 free online courses that cover a wide range of data science topics. It provides an overview of the courses, which include comprehensive programs on machine learning, Python for data science, text mining, information retrieval, and more. The article highlights the key features and learning objectives of each course, making it an invaluable resource for both beginners and experienced data enthusiasts looking to expand their expertise. With detailed course descriptions and direct links to the learning materials, this article serves as a one-stop-shop for anyone eager to dive into the exciting world of data science. | 27,985 | 2024-07-08T15:23:39 | https://dev.to/getvm/dive-into-the-world-of-data-science-free-online-courses-to-expand-your-expertise-756 | getvm, programming, freetutorial, collection |
Are you eager to dive into the exciting field of data science? Look no further! We've curated a collection of 10 free online courses that cover a wide range of data science topics, from machine learning and Python to text mining and information retrieval. 🔍

Whether you're a beginner looking to establish a solid foundation or an experienced data enthusiast seeking to expand your skillset, this list has something for everyone. Explore cutting-edge techniques, gain hands-on experience, and unlock your full potential as a data science professional. 💻
## 1. Machine Learning: Comprehensive Foundations from Cornell University
[Machine Learning | Cornell University CS4780/5780](https://getvm.io/tutorials/cs47805780-machine-learning-fall-2013-cornell-university)
Dive into a comprehensive course that covers a broad spectrum of machine learning techniques, including classification, structured models, clustering, and recommender systems. Gain a solid understanding of the theoretical foundations and engage in hands-on experimentation. 🤖
## 2. Python for Data Science, AI, and Development: A Comprehensive Guide
[Python for Data Science, AI & Development | Comprehensive Guide](https://getvm.io/tutorials/python-for-data-science-ai-development)
Discover the power of Python in the realm of data science, artificial intelligence, and software development. This comprehensive guide will teach you the ins and outs of Python, from accessing data to earning a shareable career certificate. 🐍
## 3. The Julia Express: Mastering Data Science and Scientific Computing
[The Julia Express | Data Science, Programming Languages](https://getvm.io/tutorials/the-julia-express)
Explore the versatile Julia programming language and its applications in data science and scientific computing. This comprehensive guide provides a quick introduction, covers essential topics, and lays a solid foundation for both new and experienced users. 🧠
## 4. Information Retrieval: Unlock the Secrets of Data Access and Organization
[Information Retrieval | iTunes - HPI](https://getvm.io/tutorials/information-retrieval-ss-2014-itunes-hpi)
Dive into the world of information retrieval, where you'll learn about the techniques and algorithms that power efficient data access and organization. Recommended for students, researchers, and professionals alike, this course offers a wealth of knowledge through the iTunes U platform. 🔍
## 5. Text Mining and Analytics: Uncover Insights from Unstructured Data
[Text Mining and Analytics | MOOC by ChengXiang Zhai](https://getvm.io/tutorials/mooc-text-mining-and-analytics-by-chengxiang-zhai)
Explore the fascinating field of text mining and analytics, where you'll learn fundamental concepts, practical applications, and real-world case studies. Taught by a renowned expert, this comprehensive course will equip you with the skills to extract valuable insights from unstructured data. 📚
## 6. Introduction to Machine Learning: Foundational Concepts and Applications
[Introduction to Machine Learning | Virginia Tech ECE 5984](https://getvm.io/tutorials/ece-5984-introduction-to-machine-learning-spring-2015-virginia-tech)
Dive into the fundamentals of machine learning, including supervised learning, probability, statistical estimation, and linear models. This course from Virginia Tech combines theoretical knowledge with hands-on exercises and real-world applications, making it a must-explore for aspiring data scientists. 🤖
## 7. Data Mining: Uncover Patterns and Insights in Data
[Data Mining | University of Utah CS 5955/6955](https://getvm.io/tutorials/cs-59556955-data-mining-university-of-utah)
Delve into the world of data mining, where you'll learn efficient algorithms and models for finding patterns in data sets. Covering topics like similarity search, clustering, and link analysis, this course from the University of Utah is suitable for students with basic programming and probability knowledge. 🔍
## 8. Introduction to Data Science in Python: Develop Job-Relevant Skills
[Introduction to Data Science in Python | Python, Data Science](https://getvm.io/tutorials/introduction-to-data-science-in-python)
Gain a solid foundation in data science and develop job-relevant skills through hands-on projects in this intermediate-level course from the University of Michigan. Explore the power of Python and its applications in the data science field. 🐍
## 9. Comprehensive Data Mining Course: Dive Deep with the University of Washington
[Data Mining Course | University of Washington](https://getvm.io/tutorials/csep-546-data-mining-pedro-domingos-sp-2016-university-of-washington)
Immerse yourself in a comprehensive data mining course at the University of Washington, where you'll explore a wide range of techniques and algorithms, taught by an expert in the field. Unlock the secrets of data analysis and uncover valuable insights. 🔍
## 10. Machine Learning Specialization: Foundational AI and ML Concepts
[Machine Learning Specialization | AI, Machine Learning Fundamentals](https://getvm.io/tutorials/machine-learning-specialization)
Embark on a foundational online program on machine learning and AI applications, taught by the renowned Andrew Ng of DeepLearning.AI and Stanford Online. Gain a solid understanding of the core concepts and principles that drive these cutting-edge technologies. 🤖
Dive into this curated collection of data science resources and unlock a world of possibilities. Happy learning! 🎉
## Elevate Your Learning Experience with GetVM
Unlock the full potential of the data science courses featured in this collection by utilizing the GetVM browser extension. GetVM provides an intuitive online Playground environment, allowing you to seamlessly apply the concepts you learn and experiment with hands-on projects.
With GetVM's Playground, you can dive into the course materials and immediately put them into practice. No more tedious setup or configuration – just focus on learning and coding. The Playground environment is pre-configured with all the necessary tools and libraries, empowering you to explore, tinker, and discover without any roadblocks.
Experience the power of learning by doing. The GetVM Playground enhances your understanding by enabling you to actively engage with the course content, test your skills, and validate your knowledge. Boost your confidence and accelerate your progress as a data science enthusiast by leveraging the seamless integration of theory and practice.
Don't just read about data science – immerse yourself in it. Install the GetVM browser extension and turn every course in this collection into an interactive, hands-on learning experience.
---
## Want to learn more?
- 🚀 Practice the resources on [GetVM](https://getvm.io)
- 📖 Explore More [Free Resources on GetVM](https://getvm.io/explore)
Join our [Discord](https://discord.gg/XxKAAFWVNu) or tweet us [@GetVM](https://x.com/getvmio) 😄 | getvm |
1,916,016 | The Evolution of User Interface (UI) Design: From Skeuomorphism to Neumorphism | User interface (UI) design has evolved significantly over the past few decades, keeping pace with... | 0 | 2024-07-08T15:24:15 | https://dev.to/codebridge_tech/the-evolution-of-user-interface-ui-design-from-skeuomorphism-to-neumorphism-hl7 | User interface (UI) design has evolved significantly over the past few decades, keeping pace with rapid advances in technology. Early computer interfaces relied on simple text-based commands, while modern UIs feature dynamic graphics, animations, and touch interactions.
This evolution reflects a shift toward making interfaces more intuitive, responsive, and aesthetically pleasing. While early UIs focused mainly on utility, contemporary designs prioritize the overall user experience.
Some key developments in UI design include the rise of graphical user interfaces (GUIs) in the 1980s, which introduced the desktop metaphor of icons and folders. In the 2000s, touchscreen interfaces allowed for direct, gesture-based interactions. More recently, voice and conversational UIs are changing how users interact with technology.
Over the years, UI design styles have gone through various trends, from skeuomorphic designs that mimic real-world objects, to flat and minimalist interfaces. New techniques like neumorphism are also emerging. But despite changes in aesthetics, UI design continues to be shaped by core principles like clarity, consistency, and putting the user first.
This content will provide an overview of the evolution of UI design, exploring influential styles and innovations that have shaped the interfaces we use today. It will also look ahead at where the future of UI design may be headed.
## Skeuomorphism
Skeuomorphism refers to a style of user interface (UI) design that replicates objects and textures from the real world. It emerged in the early days of graphical user interfaces (GUIs) as a way to make digital interfaces feel more familiar to users accustomed to physical buttons, knobs, and dials.
Some classic examples of skeuomorphic design include:
- The original Apple iCal calendar app which looked like a leather desk blotter
- Early digital music players with buttons and screens designed to mimic real sound equipment
- Notebook and calendar apps with faux spiral bindings and paper textures
The goal of skeuomorphism is to ease new users into unfamiliar digital environments by retaining design elements they recognize from the real world. However, skeuomorphic elements are purely ornamental and don't add functionality.
**Pros:**
- Familiar and intuitive for new users
- Creates a sense of realism and depth
- Visually rich textures and details
**Cons:**
- Ornamentation can get excessive
- Can feel outdated as users become accustomed to digital interfaces
- Overly literal representations sacrifice simplicity and efficiency
- Heavy graphics and textures slow down performance
Skeuomorphism peaked in popularity in the early 2000s and was widely used in Apple's iOS and OS X operating systems. But as users became more digitally savvy, it fell out of favor for sleeker, more minimalist design approaches.
## Flat Design
Flat design emerged in the early 2010s as a response to skeuomorphism. It is characterized by flat, minimalist elements, bright colors, and an emphasis on usability over realism.
The principles of flat design include:
- Simplicity - Removing unnecessary elements to focus on content and functionality. Flat design aims for clean, open space and clarity.
- Clarity - Flat design uses bold typography and visual hierarchy to communicate. Icons are simple but clear in meaning.
- Usability - Flat design focuses on usability and the user experience. Interfaces are intuitive, with clear calls to action.
- Minimalism - Removing ornamentation in favor of utilitarianism. Flat design uses restraint in visual elements.
- Bright colors - Vibrant colors and contrast add energy. Gradients are avoided in favor of solid blocks of color.
Examples of flat design interfaces include Windows Phone, iOS 7 and later versions, and Google's Material Design.
The pros of flat design include:
- Clean, minimal aesthetic that focuses attention
- Easier to implement and scale across platforms
- Improved usability and intuitive interfaces
The cons include:
- Can sometimes feel generic or sterile due to lack of distinctiveness
- Overuse can lead to user confusion and accessibility issues
- Lack of visual cues can reduce usability
In summary, flat design emerged as a response to skeuomorphism, with a focus on simplicity, clarity, and usability. It created cleaner interfaces, but sometimes at the expense of personality and distinctiveness.
## Material Design
Material Design was created by Google in 2014 as a design language for Android, web, and other digital interfaces. The key principles and goals of Material Design are:
- Provide a unified user experience across platforms
- Mimic paper and ink with digital materials
- Incorporate bold, graphic design with subtle motion and depth effects
- Focus on user actions and making interfaces intuitive
- Emphasize simplicity, clarity, and usability
Examples of Material Design include Google's own apps, such as Gmail, Maps, and YouTube, as well as many third-party Android apps.
The pros of Material Design are that it creates clean, bold interfaces focused on usability. The visual metaphors and motion provide clarity. It works well across different devices and form factors.
Some of the cons are that it can seem a bit generic or oversimplified at times. The heavy use of whitespace and cards doesn't appeal to all users. It relies on animation and effects which may impact performance on lower-powered devices. The emphasis on flat colors and icons can reduce accessibility for some users.
Overall, Material Design achieved Google's aims of creating a unified and usable design system. It improved upon skeuomorphism by focusing on usability rather than just visual imitation. However, subsequent styles like neumorphism address some of Material Design's limitations around individuality and accessibility.
## Neumorphism
Neumorphism emerged around 2019 as a design trend blending skeuomorphic and flat aesthetics. The name comes from "new" and "skeuomorphism".
Neumorphism features soft, rounded shapes with subtle shadows and highlights to make interface elements appear slightly raised or recessed. It aims to bring a tangible, lifelike quality while retaining a clean and minimalist look.
Some examples of neumorphic design include the iOS Calculator app, which uses raised buttons with glows and shadows. Music apps like Spotify adopted rounded rectangles with gentle highlights and shadows for an understated 3D effect.
**Pros:**
- Provides visual cues about clickability and depth without excessive ornamentation.
- Cleaner and more minimal than skeuomorphism.
- Friendlier and more approachable than flat design.
**Cons:**
- Can seem gimmicky or trendy if overused.
- Subtle lighting effects may not translate well on all displays.
- Extra effects can impact performance on lower-powered devices.
Neumorphism strikes a balance between usability and aesthetics by blending skeuomorphism and flat design. It brings in dimensionality and texture without clutter. However, like all design trends, it risks being overused if not implemented thoughtfully.
## Accessibility and Inclusiveness
Designing interfaces that are accessible and inclusive should be a top priority for [UI designers](https://www.codebridge.tech/services/ui-ux-design). With a rising global population of users with disabilities and impairments, it is crucial that digital interfaces can be used by anyone regardless of ability.
There are several key principles of inclusive design that designers should follow:
- Perceivable - Users must be able to perceive and understand all interface elements through multiple senses, not just visually. This includes adding text alternatives for images, sufficient color contrast, and clear audio or haptic feedback.
- Operable - The interface must be fully operable through various inputs beyond just a mouse. Keyboard-only users must be able to navigate and use all functions and features.
- Understandable - The interface should be intuitive, logical, and predictable. Instructions should be unambiguous. Complex processes should be broken down into logical steps.
- Robust - The interface should be robust enough to work with assistive technologies like screen readers and voice control. It should also be tested thoroughly across different platforms and devices.
By designing inclusively, UI designers can create experiences that empower and enable the widest possible user base. Rather than treating accessibility as an afterthought or add-on, it should be an integral part of the design process. With some thoughtful consideration, an accessible digital world can become a reality.
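One of these principles is directly computable: WCAG defines a contrast-ratio formula that designers and automated accessibility checkers use to verify the "Perceivable" requirement. Below is a small sketch of that calculation (the constants and thresholds follow WCAG 2.1 AA):

```python
# WCAG 2.1 contrast-ratio check: a computable piece of the "Perceivable"
# accessibility principle (sufficient color contrast).

def relative_luminance(rgb):
    """Relative luminance per WCAG 2.1; rgb is a tuple of 0-255 ints."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # Ratio of the lighter luminance to the darker one, offset by 0.05.
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def meets_aa(fg, bg, large_text=False):
    """WCAG AA requires 4.5:1 for normal text and 3:1 for large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # black on white: 21.0
print(meets_aa((119, 119, 119), (255, 255, 255)))            # mid-gray #777 on white fails AA
```

Checks like this are easy to wire into design tooling or automated UI tests, so contrast problems surface long before a manual audit.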
## The Future of UI Design
User interface design rapidly evolves with emerging technologies and changing user expectations. Here are some key trends shaping the future of UI design:
### Emerging Trends and Innovations
- Conversational interfaces and voice UI will become more prevalent as voice assistants like Alexa and Siri improve. Voice provides a natural way to interact and can enable hands-free use cases.
- Augmented reality (AR) and virtual reality (VR) offer new 3D spaces for UI. As AR/VR headsets become more mainstream, new interface paradigms will emerge based on 3D interactions.
- With advances in AI and machine learning, we'll see more anticipatory and dynamic UIs that can automatically adapt to context. For example, a UI could change based on the time of day, user location or task.
### Personalization and Contextual Design
- Increasingly, UIs will be personalized and tailored specifically for each user. Factors like user preferences, usage history and environmental context will enable more intelligent and customized experiences.
- UIs will become more contextually aware to serve users better. For example, the UI could automatically adapt when moving between work and personal contexts or switch modes when a user is stationary versus walking.
- As users become more comfortable with biometrics, we could also see UIs that identify users and pull up their settings and preferences automatically based on fingerprints, facial recognition or other biometrics.
The future is bright for UI innovation. As technology evolves, UI designers will continue pushing the boundaries to create more intuitive, immersive and intelligent interfaces. The user experience will only improve as UIs become smarter and more responsive to each person's needs and context.
## Principles of Good UI Design
User interface design should follow key principles to create an optimal user experience. Some core principles include:
### Clarity
The UI should be clear and understandable, allowing users to accomplish tasks with minimal confusion. This includes using familiar icons, clear labels, and intuitive flows. Ambiguous elements frustrate users.
### Consistency
UI elements should be consistent across the interface. Buttons, headers, menus, and other components should maintain the same styling and placement. This reinforces learning and expectations. Inconsistent UIs disorient users.
### Responsiveness
The UI must respond to user inputs and actions in real time. Any lag or delay is detrimental. Responsiveness also means adapting layouts for different screen sizes and devices.
### Clean Aesthetics
While aesthetics are somewhat subjective, UIs should avoid visual clutter and aim for clean, unobtrusive designs. Too many competing elements overwhelm users. White space, alignment, and prioritizing key content create visual coherence.
### Intuitive Navigation
Users should be able to navigate the interface and find information intuitively, without excessive cognitive load. This requires logical information hierarchies, clear menus and links, and easy paths to key pages. Overly complex navigation frustrates users.
Following these core principles creates UIs that are usable, pleasing, and focused on user goals. They establish trust and encourage engagement.
## Use Cases for Different UI Styles
When designing a user interface, the context and goals should inform the stylistic choices. Here are some considerations for when different UI styles may be most appropriate:
### Skeuomorphic Design
Skeuomorphic interfaces can be helpful when:
- Onboarding new users by leveraging familiar objects and interactions
- Creating interfaces for children or elderly users who may find overly abstract UIs confusing
- Designing interfaces for specialized fields like sound engineering or photography where physical interfaces are common
- Emulating physical objects like books, calendars, or notebooks where texture is part of the experience
Skeuomorphism works well for apps, websites or devices focused on:
- Education
- Creativity and design
- Utilities and productivity
- Gaming, especially casual games
### Flat Design
Flat design tends to excel when:
- Screen real estate is limited, like on mobile devices
- A clean, simple aesthetic is desired
- Page loading speed is a priority
- Frequent updates are expected
Flat design is commonly used for:
- Mobile apps
- High-traffic websites focused on usability
- Internal enterprise software with frequent changes
- Minimalist interfaces
### Material Design
Material design shines when:
- A branded, visually consistent experience is important across platforms
- Animation and motion can elevate the experience
- The interface needs to work across a variety of device sizes
Material design is popular for:
- Mobile apps, especially Android
- Cross-platform products
- Highly interactive interfaces with gesture navigation
- Brand-driven sites and apps
### Neumorphism
Neumorphism is best for:
- Adding subtle depth without heavy skeuomorphism
- Friendly, approachable interfaces
- More artistic/illustrative interfaces
- Experiential websites focused on storytelling
It often appears in:
- Website hero sections and page transitions
- Playful illustrations
- Landing pages and marketing sites
- Concept work and one-off pages
The context should always inform the UI style. Consider the use case, audience, and goals to determine the right stylistic direction.
## Conclusion
User interface design has come a long way in the past few decades, evolving hand-in-hand with advances in technology and changing user expectations. Early GUI designs relied heavily on skeuomorphism, mimicking real-world objects to help users understand how to interact with digital interfaces.
As touchscreens became more prevalent, skeuomorphism fell out of favor, replaced by flat and minimalist aesthetics. This opened the door to material design, which focused on creating intuitive interfaces using motion, depth, and animation. More recently, neumorphism has emerged as a trend, softening the hard edges of flat design with subtle shadows and highlights.
Throughout these shifts in style, the fundamentals of good UI design remain constant. Interfaces should be simple, consistent, responsive, accessible, and tailored to users' goals and contexts. Striking the right balance between aesthetics and functionality is key. While visual trends come and go, designing interfaces that empower users should always be the top priority.
Looking to the future, inclusive and ethical design practices will continue gaining prominence. As technology evolves, UI designers must continually reassess how to create the best possible user experiences for all.
| codebridge_tech | |
1,916,017 | Day 2: Error: "NGCC failed to run on entry-point" | Scenario: This error occurs when the Angular Compatibility Compiler (NGCC) fails to run on... | 0 | 2024-07-08T15:24:46 | https://dev.to/dipakahirav/day-2-error-ngcc-failed-to-run-on-entry-point-3gnd | angular, webdev, javascript, help | #### Scenario:
This error occurs when the Angular Compatibility Compiler (NGCC) fails to run on an entry-point during the build process. It typically happens after upgrading Angular or when there are issues with third-party libraries.
Please subscribe to my [YouTube channel](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1) to support my channel and get more web development tutorials.
#### Solution:
1. **Run NGCC Manually:**
Try running NGCC manually to see more detailed errors. The `ngcc` binary is installed locally in `node_modules`, so run it through `npx`:
```sh
npx ngcc
```
2. **Clean and Reinstall Node Modules:**
Clean the npm cache and reinstall the node modules:
```sh
npm cache clean --force
rm -rf node_modules
npm install
```
3. **Delete Angular Compiler Cache:**
Sometimes the Angular compiler cache can cause issues. Delete the cache directory:
```sh
rm -rf ./node_modules/.ngcc
```
4. **Update Angular Packages:**
Ensure all Angular packages are updated to compatible versions:
```sh
ng update @angular/core @angular/cli
```
5. **Check for Third-Party Library Issues:**
If the issue persists, it might be caused by a third-party library. Check the compatibility of third-party libraries with your Angular version. You might need to update or downgrade specific packages.
6. **Configure `ngcc.config.js`:**
If a specific package is causing the issue, you can configure NGCC to skip it by adding a `ngcc.config.js` file to the root of your project:
```js
// ngcc.config.js (place in the project root)
module.exports = {
  packages: {
    // Replace with the name of the package that NGCC fails on
    'problematic-package': {
      entryPoints: {
        './': { ignore: true }
      }
    }
  }
};
```
7. **Rebuild the Project:**
After performing the above steps, rebuild your project:
```sh
ng build
```
If you follow these steps and still encounter the error, it may be beneficial to look for any open issues related to NGCC on the Angular GitHub repository or the specific third-party library's repository.
Feel free to ask for another error and its solution tomorrow!
Please subscribe to my [YouTube channel](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1) to support my channel and get more web development tutorials.
### Follow and Subscribe:
- **Instagram**: [devdivewithdipak](https://www.instagram.com/devdivewithdipak)
- **Website**: [Dipak Ahirav](https://www.dipakahirav.com)
- **Email**: dipaksahirav@gmail.com
- **YouTube**: [devDive with Dipak](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1)
- **LinkedIn**: [Dipak Ahirav](https://www.linkedin.com/in/dipak-ahirav-606bba128)
Happy coding! 🚀 | dipakahirav |
1,916,018 | AI in Software Testing: Wins and Risks of Artificial Intelligence in QA | AI in QA is a topic you can cover once a week and still miss some novelties. A year ago, we released... | 0 | 2024-07-08T15:27:19 | https://testfort.com/blog/ai-in-software-testing-a-silver-bullet-or-a-threat-to-the-profession | qa, testing, qateam, development | AI in QA is a topic you can cover once a week and still miss some novelties. A year ago, we released an article on what ChatGPT can do for software test automation, and it seemed like a big deal.
Now, AI for software testing is a separate business, tech, and expert territory, with serious players, loud failures, and, all in all, big promise.
Let’s talk about
- Latest stats of AI QA testing (not everything is so bright there, by the way);
- How AI is already used to improve software quality;
- How AI and machine learning will/may be used to optimize testing;
- How to use the power of AI in testing software and reduce risks along the way.
Just to get it off our chest: we have just partnered with Virtuoso AI, a top-of-the-game company building AI-powered test automation tools. That means two things:
- We are excited enough to mention it about 10 times in one article;
- We write about AI testing services from experience. We use AI to automate tests, we incorporate AI when planning manual test roadmaps, and we know exactly how tools can help software testers over the upcoming 12-18 months. We don't plan further; the AI market is advancing too fast for that.
# Where Do We Stand with AI Software Testing in 2024
Numbers are forgettable and often boring without context.
But they help you see the trends, especially when they come from reliable sources. When ISTQB, Gartner, or the British Department for Science, Innovation and Technology (DSIT) cover the impact and future of software testing with AI, you take notice.
So here are just a few numbers, summarized from several research reports and surveys, that point to one thing: the traditional software testing industry is living its last years.
# Industry Insights and Statistics
- AI-driven testing can increase test coverage by up to 85%;
- Organizations using AI-driven testing reported a 30% reduction in testing costs and a 25% increase in testing efficiency;
- By 2025, 50% of all new software development projects will include AI-powered testing tools;
- 47% of current AI users had no specific cyber security practices in place for AI (not everything is so shiny, right?).
# Proven Real-World Benefits of AI Testing
Just a brief case study to show how artificial intelligence testing tools can help at any stage of the QA process. They are not required for testing, true. But maybe they already should be.
We worked with a company that creates consoles for its clients. User interface testing is paramount for such companies, but not only that. When we entered the project, we realized there were problems with bug triage, test coverage, bug report creation, requirements testing, and report creation. Using AI in software testing was new to us, but we decided to try, and we never regretted it. Check the numbers.
**Bug Triage**
- Problem. Duplicated issues and inefficiencies in assigning bugs due to multiple authors logging defects.
- Solution. Implemented DeepTriage to automate and streamline the bug triage process.
- Results. 80% decrease in analysis and bug report creation time.
**Test Coverage**
- Problem. Limited documentation time, predominantly covering only positive scenarios.
- Solution. Used ChatGPT to generate comprehensive test cases from requirements, ensuring better coverage.
- Results. 80% faster test case creation and a 40% increase in edge case coverage.
**Bug Reports Creation**
- Problem. Customer feedback needed conversion into a formal bug report format.
- Solution. Used ChatGPT to analyze and structure customer reviews into detailed bug reports.
- Results. 90% reduction in detectability and improved communication of issues.
**Requirements Testing**
- Problem. Need for structured user stories and consistent software requirements.
- Solution. Applied ChatGPT and Grammarly to analyze, restructure, and ensure consistency in software requirements.
- Results. 5x faster requirement testing and a 50% increase in spelling mistake corrections.
**Report Creation**
- Problem. Time-consuming data integration from various sources during regression testing.
- Solution. Utilized Microsoft Power BI for efficient data integration and AI-driven insights.
- Results. 30% improvement in data representation and a 50% reduction in report creation time.
Our experience with AI in software testing has grown enormously since then, but this was the start that convinced us of the real benefits of using AI in both small and enterprise-level projects.
![AI in Software Testing](https://testfort.com/wp-content/uploads/2023/04/2-AI-in-Software-Testing.png)
# How AI Can Be Used to Improve Software Testing
Even the best manual testers are limited by time and scope. AI is changing that. With machine learning and predictive analytics, AI enhances traditional manual testing processes. From test planning to execution, AI-driven tools bring precision and efficiency, making manual testing smarter and more effective.
Importantly, AI doesn’t eliminate the need for human testers; it helps them work more efficiently and focus on complex issues.
# Test Planning and Design
Test case generation analyzes historical data and user stories to produce comprehensive test cases. AI is used to increase the overall coverage of the testing process (yes, a large number of tests doesn't necessarily mean quality, but we still rely on human intelligence to filter the trash out).
Risk-based testing relies on machine learning algorithms to prioritize test cases based on potential risk and impact.
Defect prediction is based on using AI and ML predictive models to identify areas of the application most likely to contain defects.
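The risk-based idea above can be sketched in a few lines: score each test case from the historical failure rate of the test and the recent churn of the code it covers, then run the riskiest first. This is a hypothetical illustration, not any specific tool's algorithm; the weights, file names, and score formula are invented for the example:

```python
# Toy risk-based prioritization: combine historical failure rate with
# code churn of the files each test covers. Weights are illustrative.

def risk_score(test, churn, failure_rate):
    coverage_churn = sum(churn.get(f, 0) for f in test["covers"])
    return 0.6 * failure_rate.get(test["name"], 0.0) + 0.4 * coverage_churn

def prioritize(tests, churn, failure_rate):
    # Highest-risk tests first.
    return sorted(tests, key=lambda t: risk_score(t, churn, failure_rate), reverse=True)

tests = [
    {"name": "test_login",    "covers": ["auth.py"]},
    {"name": "test_checkout", "covers": ["cart.py", "payment.py"]},
    {"name": "test_search",   "covers": ["search.py"]},
]
churn = {"payment.py": 0.9, "auth.py": 0.1}   # normalized recent-commit activity
failure_rate = {"test_login": 0.5}            # share of recent runs that failed

order = [t["name"] for t in prioritize(tests, churn, failure_rate)]
print(order)  # ['test_checkout', 'test_login', 'test_search']
```

Real ML-based prioritization replaces the hand-tuned weights with a trained model, but the pipeline shape (features in, ranked test list out) stays the same.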
# Test Execution and Management
Test data management becomes easier by automating the creation and maintenance of test data sets with AI-driven tools.
Test environment optimization uses AI systems to manage and optimize test environments, ensuring they are representative of production.
Visual Testing is all about employing AI-powered visual validation tools (like Vision AI) to detect UI anomalies that human testers might miss.
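At its core, visual validation compares a fresh screenshot against a baseline. The sketch below is a deliberately naive pixel-diff illustration (real visual-AI tools add layout understanding, anti-aliasing tolerance, and ignore regions); images are modeled here as flat lists of RGB tuples:

```python
# Naive visual regression check: share of pixels whose color moved more
# than `tolerance` on any channel compared with the baseline.

def diff_ratio(baseline, screenshot, tolerance=10):
    assert len(baseline) == len(screenshot)
    changed = sum(
        1
        for a, b in zip(baseline, screenshot)
        if any(abs(x - y) > tolerance for x, y in zip(a, b))
    )
    return changed / len(baseline)

# 100-pixel "images": one pixel turned red in the new screenshot.
baseline   = [(255, 255, 255)] * 99 + [(0, 0, 0)]
screenshot = [(255, 255, 255)] * 98 + [(200, 0, 0), (0, 0, 0)]
print(diff_ratio(baseline, screenshot))  # 0.01 (one pixel of 100 changed)
```

A test would then fail when the ratio exceeds a threshold; the AI part in commercial tools is deciding which differences actually matter to a human.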
# Collaboration and Reporting
AI-powered reporting allows generation of detailed and actionable test reports with insights and recommendations using natural language processing.
Collaboration tools cover integrating AI with collaborative tools to streamline communication between testers, developers, and other stakeholders.
And now, to the most exciting part: end-to-end automated testing done right with AI-based test automation tools. It's a mouthful, but it is exactly what you need to be thinking about in 2024.
# Artificial Intelligence in Software Test Automation
Integrating AI into software testing helps you get the most from automation testing frameworks. Right now, there is hardly an automated test scenario that cannot be enhanced somehow with AI QA tools.
# Self-Healing Scripts
Self-healing scripts use AI algorithms to automatically detect and adapt to changes in the application under test, reducing the need for manual script maintenance.
Dynamic element handling allows AI to recognize UI elements even if their attributes change, ensuring tests continue to run smoothly. As UI testing becomes essential to any minor and major launch, AI can assist immensely.
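A self-healing locator can be sketched as an ordered list of fallback selectors that gets re-ranked when a fallback succeeds. This is an illustrative toy, not the API of any real framework; the page is modeled as a plain dict from selector to element:

```python
# Toy self-healing locator: try selectors in order of preference and
# promote whichever fallback works, so the script adapts to UI changes.

class ElementNotFound(Exception):
    pass

class SelfHealingLocator:
    def __init__(self, *selectors):
        # Ordered from most to least preferred (e.g. id, name, visible text).
        self.selectors = list(selectors)

    def find(self, page):
        for i, selector in enumerate(self.selectors):
            element = page.get(selector)
            if element is not None:
                if i > 0:
                    # A fallback matched: move it to the front for next time.
                    self.selectors.insert(0, self.selectors.pop(i))
                return element
        raise ElementNotFound(f"No selector matched: {self.selectors}")

# The "Submit" button lost its id after a redesign but kept its text.
page = {"text=Submit": "<button>"}
locator = SelfHealingLocator("#submit-btn", "name=submit", "text=Submit")
print(locator.find(page))    # heals via the text selector
print(locator.selectors[0])  # "text=Submit" is now tried first
```

Production tools use ML over many element attributes (position, neighbors, styling) rather than a fixed fallback list, but the healing loop itself looks like this.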
# Intelligent Test Case Prioritization
Risk-based prioritization relies on AI to analyze code changes, recent defects, and user behavior to dynamically prioritize test cases.
Optimized testing ensures critical paths are tested first, improving overall test efficiency.
# AI-Driven Regression Testing
Automated selection uses AI tools to automatically select relevant regression test cases based on code changes and historical test results.
Efficient execution speeds up the regression testing process, allowing for faster feedback and quicker releases.
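Change-based selection can be as simple as intersecting a commit's changed files with a per-test coverage map recorded on a previous run. A minimal sketch, with invented file and test names:

```python
# Select only the regression tests whose covered files intersect the
# files changed in the current commit.

coverage_map = {
    "test_login":    {"auth.py", "session.py"},
    "test_checkout": {"cart.py", "payment.py"},
    "test_profile":  {"auth.py", "profile.py"},
}

def select_tests(changed_files, coverage_map):
    changed = set(changed_files)
    return sorted(t for t, files in coverage_map.items() if files & changed)

print(select_tests(["auth.py"], coverage_map))  # ['test_login', 'test_profile']
```

AI-driven tools extend this with predicted impact when coverage data is stale or incomplete, but the coverage-intersection baseline is what they are improving on.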
# Continuous Integration and Continuous Delivery (CI/CD)
Automated code analysis employs AI tools to perform static and dynamic code analysis, identifying potential issues early in the development cycle.
AI-powered deployment verification involves using AI to verify deployments by automatically executing relevant test cases and analyzing results.
Performance testing leverages AI to simulate user behavior and load conditions, identifying performance bottlenecks and scalability issues.
# AI in Test Maintenance and Evolution
Adaptive test case generation uses AI to continuously generate and evolve test cases based on application usage data and user feedback.
Predictive maintenance applies machine learning to predict and address test script failures before they impact the CI/CD pipeline.
Automated test refactoring utilizes AI to refactor test scripts, ensuring they remain effective and efficient as the application evolves.
# Continuous Testing
Seamless integration ensures AI integrates with CI/CD pipelines, enabling continuous testing and faster feedback.
Real-time insights provided by AI offer immediate feedback on testing results, helping teams make informed decisions quickly.
By incorporating AI into automated testing, teams can achieve higher efficiency, better test coverage, and faster time-to-market. AI-driven tools make automated testing smarter, more reliable, and more adaptable to the ever-changing software landscape.
As you can see, AI in software testing takes many forms: generative AI for test scripts, natural language processing, vision and even audio processing, machine learning, data science, and so on. These are all mixed together. The good news is that testing with artificial intelligence doesn't require a deep understanding of algorithms, architectures, or types of ML training. You just need to choose the right AI testing tools... and not fall for the lies.
# AI Tools: Optimize Testing but Don't Believe Everything They Promise
We’ve been in the market for AI tools for over a year, searching for a partner that truly enhances our automated testing on both front and back ends. Many tools we encountered used AI as a buzzword without offering real value. It was frustrating to see flashy promises without substance.
Then we found Virtuoso AI. It stood out from the rest.
> “With Virtuoso, our trained professionals create test suites effortlessly. These are structured logically, maintaining reusability and being user-centric. Once we establish a baseline, maintaining test suites becomes straightforward, even as new releases come in. Regression suites run quickly and efficiently.”
> Bruce Mason, UK and Delivery Director
**Key areas of Virtuoso’s AI product include:**
Codeless automation. We can set up tests just by describing what they need to do. No coding necessary, which means quicker setup and easier changes.
Functional UI and end-to-end testing. It covers everything from button clicks to complete user flows. This ensures your app works well in real-world scenarios, not just in theory.
AI and ML integration. AI learns from your tests. It gets smarter over time, improving test accuracy and reducing manual adjustments.
Cross-browser testing and API integration. With this tool we can test how your app behaves across different web browsers, and it integrates API checks. This means thorough testing in diverse environments – a must for a consistent user experience.
**Other AI Tools for Testing**
Besides Virtuoso AI, here are a few other notable artificial intelligence software testing tools available on the market:
- Applitools. Specializes in visual AI testing, offering tools for automated visual validation and visual UI testing.
- Testim. Uses machine learning to speed up the creation, execution, and maintenance of automated tests.
- Mabl. Provides an AI-driven testing platform that integrates with CI/CD pipelines, focusing on end-to-end testing.
- Functionize. Combines natural language processing and machine learning to create and maintain test cases with minimal human intervention.
- Sealights. Focuses on quality analytics and continuous testing, offering insights into test coverage and potential risk areas.
When evaluating these tools and testing activities they cover, remember to check their true AI capabilities, scalability, integration, and support systems to ensure they meet your needs.
But let’s not ignore the broader market. There are many AI tools available, each with its own strengths and weaknesses. Here’s what to consider when evaluating them:
- True AI capabilities. Look beyond the buzzwords. Ensure the tool offers genuine AI-driven features, not just automated scripts rebranded as AI.
- Scalability. Can the tool handle large-scale projects? It should adapt to your growing needs without performance issues.
- Integration. Check how well the tool integrates with your existing systems and workflows. Seamless integration is crucial for efficiency.
- Support and Community. A strong support system and an active user community can make a significant difference. Look for tools with responsive support teams and extensive documentation.
Choosing the right AI tool for testing is critical. It’s easy to get caught up in marketing hype. Stay focused on what truly matters: real, impactful features that improve your testing process. Our experience with Virtuoso has been positive, but it’s essential to do your own research and find the best fit for your needs.
In summary, while AI tools can optimize testing, be cautious and discerning. Not all tools deliver on their promises. Seek out those that offer genuine innovation and practical benefits.
# What are the Disadvantages of AI in Software Testing?
If you feel like the previous part confirms that you may soon be out of work, don’t sell yourself short, at least for now. Here are the limitations AI has now and will keep for a considerable amount of time.
1) Lacks creativity. AI algorithms for software testing struggle to generate test cases that cover edge cases or unexpected scenarios, and they handle inconsistencies and corner situations poorly.
2) Depends on training data. Don’t forget — artificial intelligence is nothing else but an algorithm, a mathematical model being fed data to operate. It is not a force of nature or a subject for natural development. Thus, the quality of test cases generated by AI depends on the quality of the data used to train the algorithms, which can be limited or biased.
3) Needs “perfect conditions.” I bet you’ve been there — the project documentation is next to none, use cases are vague and unrealistic, and you just squeeze information out of your client. AI can’t do that. The quality of its work will be exactly as good or bad as the quality of the input and context turned into quantifiable data. Do you receive lots of that at the beginning of your QA projects?
4) Has limited understanding of the software. We tend to bestow superpowers on AI and its understanding of the world. In fact, that understanding is still very limited. AI may not have a deep understanding of the software being tested, which can result in missed important scenarios or defects.
5) Requires skilled professionals to operate. For example, integrating a testing strategy with AI-powered CI/CD pipelines can be complex to set up, maintain, and troubleshoot, as it requires advanced technical skills and knowledge. Tried and true methods we use now may, for years, stay much cheaper and easier to maintain.
# How AI-Based Software Testing Threatens Users and Your Business
There is a difference between what AI can’t do well and what can go wrong even when it does its job perfectly. Let’s dig into the threats related to the testing tasks artificial intelligence can take over.
- Bias in prioritization and lack of transparency. It is increasingly difficult to comprehend how algorithms make prioritization decisions, which makes it hard to ensure that tests are prioritized in an ethical and fair manner. Biases in the data used to train artificial intelligence models and tools can result in skewed test prioritization.
Example. Suppose the training data contains a bias, such as a disproportionate number of test cases from a particular demographic group. In that case, the algorithm may prioritize tests in a way that unfairly favors or disadvantages certain groups. If the training data contains more test cases from men than from women, the AI tool may assume that men are the primary users of the software and women are secondary users. This could result in unfair or discriminatory prioritization of tests, which could negatively impact the quality of the software for underrepresented groups.
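To make the mechanism concrete, here is a toy sketch of how a naive, frequency-based ranker inherits bias from its history. All names here are ours for illustration; no real AI tool works exactly this simply:

```javascript
// Toy illustration: a ranker "trained" on biased history keeps
// favoring the over-represented group in its prioritization.
function prioritize(testCases, history) {
  const counts = {};
  for (const past of history) {
    counts[past.group] = (counts[past.group] ?? 0) + 1;
  }
  // Sort test cases by how often their user group appeared historically.
  return [...testCases].sort(
    (a, b) => (counts[b.group] ?? 0) - (counts[a.group] ?? 0)
  );
}
```

With a history of three "men" cases and one "women" case, tests for the under-represented group always sink to the bottom, regardless of their real risk.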
- Overreliance on artificial intelligence in software testing. Removing human decision-making reduces creativity in testing approaches, pushes edge cases aside, and, in the end, may cause more harm than good. Lack of human oversight can result in incorrect test results and missed bugs, while increasing human oversight adds maintenance overhead.
Example. If the team relies solely on AI-powered test automation tools, they may miss important defects that could significantly impact the software’s functionality and user experience. A human eye catches inconsistencies by drawing on a full background of experience with similar solutions; artificial intelligence relies only on limited data and mathematical models. The more advanced this tech gets, the harder it is to check the validity of its results, and the riskier overreliance becomes. It can create a false sense of security and result in software releases with unanticipated defects and issues.
- Data security-related risks. Test data often contains sensitive personal, confidential, and proprietary information. Using AI for test data management may increase the risk of data breaches or privacy violations.
Example. Amazon changed the rules its coders and testers must follow when using AI-generated prompts after an alleged data security breach. ChatGPT reportedly responded in ways suggesting it had access to internal Amazon data and could share it with users worldwide upon request.
# So, What Will Happen to AI in Testing?
What is the future of software testing with AI?
We don’t know.
You don’t know.
Our partners at Virtuoso AI don’t know.
We can guess the general direction —
- Manual testers will get more into prompting and will generate test scripts that allow more coverage with fewer motions;
- Expert manual testers will also be more valued for the human touch and human-eye checks after AI testing tools;
- Test automation frameworks will be almost 100% driven by AI;
- Continuous testing will become more affordable than ever;
- The “we need a large number of test cases” trend will be overtaken by prioritization in testing and monitoring;
- Soon there will be tools for almost any testing need, but only the most efficient and affordable solutions will survive the competition.
AI is transforming how we do software development and testing.
If you are a manual QA beginner, you had better hurry and invest in your skills. The less expert your current tasks are, and the easier they are to automate, the faster algorithms will come after your job.
In our company, we started to apply AI-based tools for test automation back in 2022 and continue adopting new tech with new partners — Virtuoso, Google, Amazon, etc.
Will it be enough to stay relevant and efficient?
We definitely hope so. AI can help, but the software testing process is much more complex than just applying new tricks.
*By testfort_inc*

---

# 10 Tips for Building a Cyber Resilience Strategy

*By clouddefenseai, published 2024-07-08. Canonical: https://www.clouddefense.ai/build-a-cyber-resilience-strategy/*

In today's digital landscape, cybercrimes pose significant threats to businesses, resulting in halted operations and substantial financial losses. Despite heavy investments in cybersecurity, many organizations still fall victim to cyberattacks. Therefore, building a cyber resilience strategy is crucial to safeguard business operations and mitigate risks. This comprehensive approach not only prevents attacks but also ensures swift recovery from breaches.
### What is a Cyber Resilience Strategy?
A cyber resilience strategy is a structured plan that helps organizations discover, respond to, and recover from cyberattacks. It combines cybersecurity measures with resilience, ensuring that businesses can adapt to evolving threats and minimize harm. Unlike traditional cybersecurity, which focuses on preventing attacks, cyber resilience accepts the possibility of breaches and prepares for rapid response and recovery.
### Key Components of a Cyber Resilience Strategy
A cyber resilience strategy involves identifying and protecting critical assets, including infrastructure, systems, services, and data. Developing a comprehensive incident response plan is crucial, outlining roles, recovery processes, and communication strategies. Limiting access to sensitive data to authorized users and monitoring user behavior are essential steps. Regularly training employees on cybersecurity practices and threat identification enhances the strategy's effectiveness. Continuous improvement is vital to adapt to the evolving cybersecurity landscape, ensuring the organization remains resilient against new threats.
### Common Cyber Resilience Threats
Cyber resilience threats are varied and evolving, and organizations must be aware of them. Cybercriminals pose constant threats through DDoS attacks, malware, and ransomware. Human errors within the organization can lead to data loss and operational disruptions. The absence of a documented incident response plan increases the impact of attacks. A lack of reliable backup systems jeopardizes recovery efforts. Natural disasters, such as earthquakes and floods, can also affect IT infrastructure and operations, posing significant threats to cyber resilience.
### The Benefits of a Cyber Resilience Strategy
Building a cyber resilience strategy offers numerous benefits. It reduces operational downtime, minimizes financial losses, and strengthens overall cybersecurity, helping to discover vulnerabilities. Ensuring adherence to regulatory requirements like GDPR, HIPAA, and CCPA becomes more manageable. A robust strategy maintains trust and demonstrates a commitment to digital asset protection, positioning the organization as a trusted and reliable entity. Continuous improvement keeps the organization ahead of new cyber threats, maintaining a competitive edge.
### 10 Tips for Building a Cyber Resilience Strategy
**1. Create an Incident Response Plan:** Document procedures for detecting and recovering from cyber incidents.
**2. Emphasize Employee Training:** Regularly train employees on cybersecurity scenarios and best practices.
**3. Conduct Regular Testing:** Evaluate security controls and methodologies through audits and penetration testing.
**4. Assess the Overall Security Posture:** Regularly review the organization's security posture to identify and remediate vulnerabilities.
**5. Enforce Data Protection and Encryption:** Implement encryption and data protection measures to safeguard sensitive information.
**6. Implement Collaborative Efforts:** Partner with experts to enhance cyber resilience capabilities.
**7. Employ Continuous Monitoring:** Use real-time monitoring tools to detect and address vulnerabilities.
**8. Build a Proper Recovery Strategy:** Develop a robust backup and restoration plan.
**9. Continuously Improve:** Regularly update and enhance the cyber resilience strategy.
**10. Stay Updated with the Latest Threats:** Keep informed about the latest cybersecurity threats and trends.
### Conclusion
Building a cyber resilience strategy requires a strategic approach, integrating both prevention and recovery measures. By following these 10 tips, organizations can develop a practical and effective strategy, ensuring they are well-prepared to face and mitigate cyber threats.

*By clouddefenseai*
---

# The Importance of Community for Open Source 🌱💚🌍

*By pachicodes, published 2024-07-08. Canonical: https://dev.to/pachicodes/the-importance-of-community-for-open-source-5ak. Tags: opensource, community, beginners, github*
If you have been following me for a little while, you know by now that I love Open Source and I love community. Open Source projects stand out for their collaborative nature and community-driven approach; these two topics go hand in hand. However, sometimes I feel like people still don't understand how important community is to Open Source.
🌱🌱🌱
The success of these projects relies not just on individual contributions but on the vibrant communities that form around them. This article explores why community is the cornerstone of Open Source projects, driving innovation, learning, and support.
---
## Collaboration and Innovation
**Open Source projects are unique ecosystems** where developers from around the world bring their diverse perspectives to solve problems and build together. Collaboration is the lifeblood of these projects, as it allows for pooling of resources, sharing of ideas, and rapid iteration. For example, projects like Linux and Python have thrived due to their inclusive communities that embrace contributions from anyone with the skill and will to participate.
---
## Learning and Growth
Being part of an Open Source community is an educational journey. Developers get to interact with more experienced contributors, gaining insights that no tutorial can provide. This hands-on experience accelerates learning and helps developers enhance their skills in real-world settings. Some communities also hold workshops, webinars, and mentorship programs, further supporting professional growth.
---
## Support and Mentorship
For newcomers, Open Source communities are invaluable resources. Good communities offer guidance on both technical issues and career advice, helping individuals navigate the complexities of both coding and professional development. Mentorship, whether formal or informal, is usually available, making these communities nurturing spaces for new programmers.
---
🌟 **Support Open Source Innovation** 🌟
I am happy to work for an Open Source community that is building an ecosystem of plugins for the JavaScript community. You can support and encourage us to keep developing resources and fostering a vibrant community with just one click:
👉 [Star Webcrumbs on GitHub](https://github.com/webcrumbs-community/webcrumbs) ⭐
---
## Making a Difference
Contributing to an Open Source project can be incredibly rewarding. Not only do you get to influence tools and technologies used globally, but you also make tangible contributions that reflect back on your own professional credibility. Every code commit, documentation update, or bug report helps improve the project and is a learning opportunity.
---
## Get Involved Today!
**There’s no better time than now to join an Open Source community.** Whether you are a seasoned developer or a code newbie, your unique skills and perspectives are valuable. Start by exploring projects that align with your interests or professional goals, participate in discussions, and make your first contributions.
The spirit of Open Source is all about collaboration and shared progress.
🌱🌱🌱
By joining an Open Source community, you not only contribute to the technological landscape but also grow as a developer and make lasting connections.
So dive in, the Open Source world awaits your contribution.
---
### 🌟 **Let's Build Together!** 🌟
If you believe in the power of Open Source and community-driven development, [join us at WebCrumbs Community](https://discord.gg/4PWXpPd8HQ). We are a group of developers building some cool tools for the JavaScript community!
Star us on GitHub to stay connected and contribute to our growing ecosystem of tools and resources.
👉 [Star Webcrumbs on GitHub](https://github.com/webcrumbs-community/webcrumbs)
**Thanks for reading**
Pachi 💚

*By pachicodes*
---
title: How to Protect Your Application from AI Bots
published: true
date: 2024-07-08 14:00:00 UTC
tags: programming, security, webdev, tutorial
canonical_url: https://www.permit.io/blog/introduce-ai-bots-in-applications
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xdd07g49nm5eatswvy9c.jpg
---
Bots have traditionally been something we try to prevent from entering our applications. There were, of course, always good (or at least useful) bots as well, such as search engines and test automation scripts, but those were always less common compared to malicious ones.
In the past year, since LLMs and GPT have become a major part of our lives, we’ve found ourselves with a whole new set of bots that don’t really fit the categories previously known to us.
These machine identities are welcomed into our applications by users who want to get some help from GenAI, but that doesn’t change the fact that they should be a concern for us as developers who are trying to protect our data and users.
These new entities can be harmful from many security and privacy perspectives, and we need to know how to handle them.
In this blog, we will evaluate the presence of GenAI identities in our applications and show how to create a smart layer of fine-grained authorization that allows us to integrate bots securely into our applications. To do that, we’ll use two tools: [Permit.io](http://permit.io/ "http://permit.io/") and [Arcjet](https://www.arcjet.com/ "https://www.arcjet.com/"), to model and enforce these security rules in our applications.
## Rise of The Machines
We’ve always had a very clear distinction between human and machine identities. The methods used to authenticate, authorize, audit, secure, and manage those identities have been very different. In security, for example, we tried to strengthen human identities against **human error** (e.g., phishing, password theft), while with machine identities, we mostly dealt with **misconfiguration problems**.
The rise of GenAI usage has brought new types of hybrid identities that can potentially suffer from problems in both worlds. These new hybrid identities consist of machine identities that adopt human attributes and human identities that adopt machine capabilities.
From the machine side, take Grammarly (or any other grammar AI application) as an example. Grammarly is a piece of software that is supposed to be identified as a machine identity. The problem? It’s installed by the application user without notifying the developers of a new type of machine identity in the system. It gets access to tons of private data and (potentially) can perform actions without letting the developers deal with the usual weaknesses of machine identities.
On the other hand, from the human identity side, GenAI tools give human identities machine power without requiring the “skills” to use it. Even a conventional subreddit like [r/prompthacking](https://www.reddit.com/r/prompthacking/ "https://www.reddit.com/r/prompthacking/") provides people with prompts that help them use data in ways no one expects a human identity to perform. It is common practice to apply less validation and sanitization to API inputs than to the UI, but if users can access those APIs via bots, they can abuse our trust in the API contract.
These new hybrid identities force us to rethink the way we are handling identity security.
## Ranking Over Detecting
One of the questions we need to re-ask when securing our applications for such hybrid identities is ‘ **Who is the user?’**. Traditionally, that was an authentication-only question. The general authorization question of ‘ **What can this user do?’** fully trusts the answer of the authentication phase. With the new hybrid identities that can easily use a non-native method to authenticate, trusting this binary answer from the authentication service can lead to permission flaws. This means we must find a new way to ask the “Who” question on the authorization side.
In order to help the authorization service better answer this detailed "Who" question, we need to find a proper method to rank identities by multiple factors. This ranking system will help us make better decisions instead of using a traditional `good`/`malicious` user ranking.
When considering the aspects of building such a ranking system, the first building block is to have a method to rank our type of identity somewhere between machine and human. In this article, we will use [**Arcjet**](https://www.arcjet.com/ "https://www.arcjet.com/") **,** an open-source startup that builds application security tools for developers and, more specifically, their ranking system. The following are the different ranks of identities between good and malicious bots, which could help us understand the first aspect of the new `Who` question.
- `NOT_ANALYZED` - Request analysis failed; might lack sufficient data. Score: 0.
- `AUTOMATED` - Confirmed bot-originated request. Score: 1.
- `LIKELY_AUTOMATED` - Possible bot activity with varying certainty, scored 2-29.
- `LIKELY_NOT_A_BOT` - Likely human-originated request, scored 30-99.
- `VERIFIED_BOT` - Confirmed beneficial bot. Score: 100.
As you can see, each rank level also has a detailed score, so we can detect even more for the levels we want to configure.
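Assuming the score bands above hold, a small helper can translate a raw score into its rank label. This is our own sketch — `rankFromScore` is not an Arcjet API, and a real integration would read the rank directly from Arcjet's decision object:

```javascript
// Sketch only: map a raw bot score (0-100) to the rank labels listed above.
function rankFromScore(score) {
  if (!Number.isInteger(score) || score < 0 || score > 100) {
    throw new RangeError(`unexpected score: ${score}`);
  }
  if (score === 0) return "NOT_ANALYZED";
  if (score === 1) return "AUTOMATED";
  if (score <= 29) return "LIKELY_AUTOMATED"; // 2-29: possible bot activity
  if (score <= 99) return "LIKELY_NOT_A_BOT"; // 30-99: likely human
  return "VERIFIED_BOT";                      // 100: confirmed beneficial bot
}
```

Having the score alongside the label lets you tighten or relax the thresholds per application rather than treating the ranks as fixed buckets.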
Assuming we can rank our user activity in real-time, we will have the first aspect required to answer the question of “Who is the user?” when we make the decision of “What can they do?” Once we have this ranking, we can also add more dimensions to this decision, making it even more sophisticated.
## Conditions, Ownership, and Relationships
While the first aspect considers only the user and its behavior, the others will help us understand the sensitivity of the particular operation that a user is trying to perform on a resource. Let’s examine three methods that can help us build a comprehensive policy system for ranking hybrid identity operations.
- **Conditions** : With this method, we use the context of the authorization decisions and the user's bot rank score to build conditions that will help us get better control of these identity operations.
For example, if we want to keep ourselves safe from an injection of malicious data by applications acting on behalf of a human user, we can combine the bot's rank with the characters in the request to assess the level of danger.
- **Ownership** : With this method, we add the level of ownership to the decision data to give it another dimension.
For example, we can decide that a user can let high-confidence bots modify their own data but not perform changes or read others’ data, while bots with lower ranking do not get access at all.
- **Relationship** : With this method, we can use the relationship between entities to create another trust dimension in the user's rank.
For example, if we have a folder owned by a user and files in this folder owned by others, we can let the good bot perform operations on the folder but not the files.
Thinking of our new ranking system as multidimensional ranking can even help us mix and match our ranking systems for a fine-grained answer to the ” **What** ” question that takes the “ **Who** ” question into account.
Let’s model such a simple ranking system in the [Permit.io](http://Permit.io "http://Permit.io") fine-grained authorization policy editor.
## Modeling a Bot Ranking System
While [Permit.io](http://Permit.io "http://Permit.io") allows you to build a comprehensive multi-dimensional system with all the dimensions we mentioned, for the sake of simplicity in this tutorial, we will build a system based only on _conditions_ and _ownership_.
For the model, we will use an easily understandable data model of a CMS system, where we have content items with four levels of built-in access: Public, Shared, Private, and Blocked.
- Public documents are documents that are owned by the user and shared publicly
- Shared documents are documents that are owned by others and shared publicly
- Private documents are documents that are owned by the user and marked as private
- Blocked documents are documents that are owned by others and marked as private
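These four levels are just a function of two booleans — ownership and privacy. A short sketch (the names are ours, not part of the demo code) makes that explicit:

```javascript
// Sketch: derive the built-in access level of a content item,
// relative to the requesting user.
function accessLevel(item, userId) {
  const owned = item.owner === userId;
  if (!item.private) return owned ? "public" : "shared";
  return owned ? "private" : "blocked";
}
```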
First, we would like to create segments for potential hybrid identities in our application. We will follow Arcjet's model directly and configure user sets that correlate to these bot types.
1. Login to Permit at [app.permit.io](http://app.permit.io "http://app.permit.io") (and create a workspace if you haven’t)
2. Go to Policy → Resources and create an Item resource with the following configuration

3. Now, we want to add a user attribute that allows for smart bot detection. Go to the Directory page, click the `Settings` button in the top-right corner of the page, and choose `User Attributes` in the sidebar. Add the following attributes

4. Go back to the Policy screen, and navigate to ABAC Rules. Enable the ABAC options toggle, and click Create New on the User Set table. Create the following four user sets.

If we go back to the Policy screen now, we can see that our different user sets allow us to use the first dimensions of protection on our application.

Let’s now create the ranking on the other dimension with conditions.
1. Go to ABAC Rules → Create Resource Set and create four resource sets, one per item type

2. Go back to the Policy screen and ensure the following configuration

As you can easily see, we configured the following rules in the table:
- Human users can read Public, Shared, and Private data
- Trusted bots are allowed to read Public and Shared data
- Untrusted bots can read only the Public
- Evil bots (bots that try to hide their automated identity) are blocked for all kinds of data
This configuration actually stretched the “static” question of who to a dynamic multi-dimensional decision. With the policy model in mind, let’s run the project and see the dynamic enforcement in action.
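For readability, the four rules above can be restated as a simple lookup in plain JavaScript. This sketch is illustrative only — in the actual setup, the decision is made by Permit.io's policy engine, not by application code:

```javascript
// Illustrative restatement of the policy table; not the enforcement path.
const READ_POLICY = {
  human: ["public", "shared", "private"],
  trustedBot: ["public", "shared"],
  untrustedBot: ["public"],
  evilBot: [],
};

function canRead(userKind, level) {
  return (READ_POLICY[userKind] ?? []).includes(level);
}
```

Keeping the table in a policy engine instead of code like this is what lets the rules change at runtime without redeploying the application.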
## Running a Demo
In the following Node.js project, we built a very simple application that stores four content items, one for every category we defined. From the user perspective, we haven’t implemented authentication; instead, we use a hardcoded authentication token with the user ID on it.
All the code is [available here](https://github.com/permitio/fine-grained-bot-protection/tree/main "https://github.com/permitio/fine-grained-bot-protection/tree/main") - [https://github.com/permitio/fine-grained-bot-protection](https://github.com/permitio/fine-grained-bot-protection)
> To run the project locally in your environment, please follow the steps described in the [project’s Readme.md file](https://github.com/permitio/fine-grained-bot-protection/blob/main/README.md "https://github.com/permitio/fine-grained-bot-protection/blob/main/README.md")
Let’s take a short look at the code to understand the basics of our authorization enforcement.
First, we have our endpoint that reads documents from our “in-memory” database. This endpoint is fairly simple: it takes a hardcoded list of items, filters it by authorization level, and returns the result to the user as plain text.
```js
const ITEMS = [
  { id: 1, name: "Public Item", owner: USER, private: false },
  { id: 2, name: "Shared Item", owner: OTHER_USER, private: false },
  { id: 3, name: "Private Item", owner: USER, private: true },
  { id: 4, name: "Blocked Item", owner: OTHER_USER, private: true },
];

app.get("/", async (req, res, next) => {
  const items = await authorizeList(req, ITEMS);
  res
    .type("text/plain")
    .send(items.map(({ id, name }) => `${id}: ${name}`).join("\r\n"));
});
```
The authorize function that we are using here enforces permissions in two steps. First, it checks the bot rank of the particular request, and then it calls the `bulkCheck` function on Permit with the real-time data of the Arcjet bot rank and the content item attributes. With this context, Permit’s PDP returns the relevant decisions.
```js
const authorizeList = async (req, list) => {
  // Get the bot detection decision from Arcjet
  const decision = await aj.protect(req);
  const isBot = decision.results.find((r) => r.reason.isBot());
  const {
    reason: { botType = false },
  } = isBot;

  // Check authorization for each item in the list.
  // For each user, we add the botType attribute;
  // for each resource, we add the item attributes.
  const authorizationFilter = await permit.bulkCheck(
    list.map((item) => ({
      user: {
        key: USER,
        attributes: {
          botType,
        },
      },
      action: "read",
      resource: {
        type: "Content_Item",
        attributes: {
          ...item,
        },
      },
    }))
  );

  return list.filter((item, index) => authorizationFilter[index]);
};
```
To test the application, you can use your local endpoint or use our deployed version at: [https://fga-bot.up.railway.app/](https://fga-bot.up.railway.app/ "https://fga-bot.up.railway.app/")
1. Visit the [application in your browser](https://fga-bot.up.railway.app/ "https://fga-bot.up.railway.app/"); you can read the Public, Shared, and Private items

2. Try to run the following curl call from your terminal. Since this bot does not try to hide the fact that it is a bot, it will get the results of a trusted bot.
```
curl https://fga-bot.up.railway.app/
```

3. Now, let's try running a bot from a cloud-hosted machine that pretends to be human via its user agent. To run the bot, open the following environment and run the code there.
[https://replit.com/@gabriel271/Nodejs#index.js](https://replit.com/@gabriel271/Nodejs#index.js)
As you can see, this bot will get only the public items

As you can see in this demo, we have a dynamic, context-aware system that ranks our hybrid identity in real-time and returns the relevant permissions as per activity.
## Developer Experience and GenAI Security Challenges
Security challenges associated with non-human identities are threats that developers need to handle directly. Traditionally, security teams have been responsible for these concerns, but the complexity and nuances of hybrid identities require a developer-centric approach. Developers who understand the intricacies of their applications are better positioned to implement and manage these security measures effectively.
The only way to ensure our applications' resilience is to use tools that give developers an experience that integrates seamlessly into the software development lifecycle. The tools we used in this article are great examples of how to deliver the highest standards of application security while speaking the language of developers.
With [Permit.io](http://Permit.io "http://Permit.io"), developers (and other stakeholders) can easily modify authorization policies to respond to emerging threats or changing requirements. For instance, we can adjust our rules to prohibit bots from reading shared data. This change can be made seamlessly within the [Permit.io](http://Permit.io "http://Permit.io") dashboard, instantly affecting the authorization logic without altering the application's codebase. All developers have to do is keep a very low footprint of enforcement functions in their applications.
With Arcjet, we are taking the usual work of bot protection, which is traditionally part of the environment setup and external to the code itself, into the application code. Using the Arcjet product-oriented APIs, developers can create a dynamic protection layer without the hassle of maintaining another cycle in the software development lifecycle.
The demo above shows how these tools work together without any setup outside the core of product development.
## Conclusion
In this article, we explored the evolving landscape of machine identities and the security challenges they present. We discussed the importance of decoupling application decision logic from code to maintain flexibility and adaptability. By leveraging dynamic authorization tools like [Permit.io](http://Permit.io "http://Permit.io"), we can enhance our application's security without compromising on developer experience.
To learn more about your possibilities with Permit’s fine-grained authorization, we invite you to take a deeper tour of the Permit UI and documentation and learn how to achieve fine-grained authorization in your application. We also invite you to visit the [Arcjet live demos](https://github.com/arcjet/arcjet-js?tab=readme-ov-file#examples "https://github.com/arcjet/arcjet-js?tab=readme-ov-file#examples") to experience the other areas where they can provide great application protection with few lines of code. | gemanor |
1,916,023 | Books vs online courses vs projects? 📚 | 3rd question from the Dev Pools series. Same rules as before. Since we can't add polls, we select an... | 27,980 | 2024-07-09T13:30:00 | https://dev.to/devonremote/books-vs-online-courses-vs-projects-3eog | discuss, programming, career, learning | **3rd question** from the **Dev Pools** series.
Same rules as before. Since we can't add polls, we select an emoji.
**Which learning method works for u the most?**
❤️ - books/articles
🦄 - online courses
🔥 - hands-on projects & practice
---
Maybe sth else?
| devonremote |
1,916,024 | 2024 UI/UX Design Trends: Shaping the Future of User Experience | As the digital landscape continues to evolve, so does the world of user interface (UI) and user... | 0 | 2024-07-08T15:37:16 | https://dev.to/codebridge_tech/2024-uiux-design-trends-shaping-the-future-of-user-experience-1k10 |
As the digital landscape continues to evolve, so does the world of user interface (UI) and user experience (UX) design. In 2024, we can expect to see exciting new trends and innovations that will shape the user experience of tomorrow. In this blog post, we will explore some of the key [UI/UX design trends](https://www.codebridge.tech/services/ui-ux-design) we can anticipate in 2024.
1. Augmented Reality (AR) Integration
With the rise of augmented reality, UI/UX designers will increasingly incorporate AR elements to enhance user experiences. AR can provide users with interactive, immersive, and personalized experiences by overlaying digital elements onto the real world. From shopping experiences where users can try on virtual clothing to interactive educational content that brings subjects to life, AR integration will play a significant role in shaping UI/UX design in 2024.
2. Voice User Interface (VUI) Design
Voice assistants and smart speakers have become commonplace in our homes, and in the coming years, voice user interface design will become even more prevalent. As natural language processing and voice recognition technologies continue to improve, UI/UX designers will need to focus on creating intuitive and conversational VUI experiences. This will involve designing clear and concise voice commands, creating natural language responses, and predicting user intent accurately.
3. Microinteractions
Microinteractions are small, subtle animations or feedback that provide meaningful interactions between users and digital interfaces. In 2024, we can expect to see an increased emphasis on microinteractions as they add depth and interest to user experiences. These microinteractions can range from small visual cues that indicate a button is being pressed to more complex animations that reflect changes in data. By leveraging microinteractions, UI/UX designers can create more engaging and intuitive interfaces.
4. Dark Mode and Low-Light Design
The dark mode trend has gained significant popularity in recent years, and it is expected to become even more prevalent in 2024. Dark mode not only provides a visually appealing aesthetic but also reduces eye strain and improves battery life, especially on OLED screens. In addition to dark mode, UI/UX designers will also focus on creating low-light designs that provide optimal readability and usability in dimly lit environments.
5. Sustainability and Eco-Friendly Design
With growing awareness of environmental issues, sustainability and eco-friendly design will play a crucial role in UI/UX design in 2024. Designers will focus on creating interfaces and experiences that are energy-efficient, use minimal resources, and promote sustainable practices. This may include features such as energy-saving modes, eco-friendly themes, and encouraging users to adopt environmentally friendly behaviors.
6. Personalization and Customization
In the age of data analytics and artificial intelligence, personalized experiences have become the norm. In 2024, UI/UX designers will continue to prioritize personalization by creating interfaces tailored to individual user preferences and behaviors. Whether it's personalized recommendations, adaptive interfaces, or customizable layouts, users will expect interfaces that cater to their unique needs and preferences.
**Conclusion**
The world of UI/UX design is constantly evolving, and 2024 promises to bring exciting new trends and innovations. From augmented reality integration to sustainable design practices, designers will need to adapt to create intuitive, engaging, and personalized user experiences. By keeping an eye on these 2024 UI/UX design trends, designers can stay ahead of the curve and shape the future of user experience.
| codebridge_tech | |
1,916,025 | React Native or Flutter for App development | A post by Aadarsh Kunwar | 0 | 2024-07-08T15:37:40 | https://dev.to/aadarshk7/react-native-or-flutter-for-app-development-6hg | appdevelopment, ai, appdeveloper, flutter | aadarshk7 | |
1,916,027 | How to Use Text Underline Offset in Tailwind CSS | To use text underline offset in Tailwind CSS, you can use the underline-offset-{amount} utility... | 0 | 2024-07-10T15:32:00 | https://larainfo.com/blogs/how-to-use-text-underline-offset-in-tailwind-css/ | tailwindcss, webdev | To use text underline offset in Tailwind CSS, you can use the underline-offset-{amount} utility class. Here's how to apply it:
1. Add the underline class to enable underlining.
2. Use underline-offset-{amount} to set the offset.
The available amounts are:
underline-offset-auto
- underline-offset-0
- underline-offset-1
- underline-offset-2
- underline-offset-4
- underline-offset-8
```html
<p class="underline underline-offset-4">
This text has an underline with an offset of 4 pixels.
</p>
```

You can also combine this with other text decoration utilities like `decoration-{color}` to change the underline color.
```html
<p class="underline underline-offset-2 decoration-blue-500">
This text has a blue underline with an offset of 2 pixels.
</p>
```

**You can also use this in article headings with a decorative underline, e.g.:**
```html
<article class="max-w-2xl mx-auto">
<h1 class="text-3xl font-bold mb-4 underline underline-offset-8 decoration-4 decoration-blue-500">
The Future of Web Development
</h1>
<p class="mb-4">
Web development is constantly evolving...
</p>
<h2 class="text-2xl font-semibold my-3 underline underline-offset-4 decoration-2 decoration-green-400">
1. Rise of AI-powered Tools
</h2>
<p class="mb-4">
Artificial Intelligence is making its way into web development...
</p>
<!-- More content... -->
</article>
```

[View Demo](https://play.tailwindcss.com/I2QPvtXgmU)
| saim_ansari |
1,916,029 | The Ultimate Guide to Weather REST APIs: Choosing the Right One for Your Project | Weather data is critical for many applications, from agriculture to event planning, and selecting the... | 0 | 2024-07-08T15:41:41 | https://dev.to/sameeranthony/the-ultimate-guide-to-weather-rest-apis-choosing-the-right-one-for-your-project-5aa6 | api, rest, weather, software | Weather data is critical for many applications, from agriculture to event planning, and selecting the right Weather REST API can greatly impact your project's success. In this guide, we'll explore the key aspects to consider when choosing a weather API, focusing on both commercial and navigational perspectives.
## Understanding Weather REST APIs
A **[Weather REST API](https://weatherstack.com/)** is a service that provides access to real-time weather data and forecasts via the internet. These APIs are essential for developers who need to integrate weather information into their applications. Whether you need real-time weather data for decision-making or historical data for analysis, the right API can make a significant difference.
## Key Features to Look For
## Real-Time Weather Data
One of the most critical features of a weather API is the ability to provide real-time weather data. This includes up-to-date information on temperature, humidity, wind speed, and other meteorological factors. APIs like Weatherstack and OpenWeatherMap offer robust real-time data that can be embedded into applications for instant access.
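When evaluating real-time data support, it helps to prototype the request and response handling up front. The sketch below is a minimal illustration: the `/current` endpoint path, the `access_key` and `query` parameters, and the response field names are assumptions rather than any specific provider's API, so check the documentation of the API you choose.

```javascript
// Sketch of the data handling for a real-time weather lookup.
// The endpoint path, query parameter names, and response fields
// below are assumptions for illustration only.
function buildCurrentWeatherUrl(baseUrl, apiKey, city) {
  const params = new URLSearchParams({ access_key: apiKey, query: city });
  return `${baseUrl}/current?${params.toString()}`;
}

// Pull out the handful of fields most applications need.
function summarizeCurrent(response) {
  const { temperature, humidity, wind_speed: windSpeed } = response.current;
  return { city: response.location.name, temperature, humidity, windSpeed };
}

// Example with a mocked response (no network call needed):
const mockResponse = {
  location: { name: "London" },
  current: { temperature: 18, humidity: 72, wind_speed: 11 },
};
console.log(buildCurrentWeatherUrl("https://api.example.com", "KEY", "London"));
console.log(summarizeCurrent(mockResponse));
```

Keeping the URL building and response parsing in small pure functions like this makes it easy to swap providers later without touching the rest of your app.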
## Historical Weather Data
For applications requiring analysis of past weather conditions, access to historical data is crucial. Look for APIs that offer free historical weather data or have extensive historical databases. The free historical weather data API from Weatherstack is a great example, providing comprehensive past weather information.
## Forecasting Capabilities
Forecasting is another essential feature. The best APIs provide detailed forecasts for multiple days, which can be integrated into applications for planning and decision-making. Free forecast API options are also available, offering valuable forecasting at no added cost.
## Data Format and Accessibility
APIs that provide data in multiple formats, such as JSON and XML, are more versatile. The weather JSON API format is particularly popular due to its simplicity and ease of integration into various applications. Ensure the API supports formats that fit your project's needs.
## Evaluating API Providers
## Best Weather APIs
When choosing an API, consider the reputation and reliability of the provider. Some of the best weather APIs include Weatherstack, OpenWeatherMap, and Weatherbit. These providers offer comprehensive data, extensive documentation, and reliable services.
## Free Weather API Options
If budget constraints are a concern, look for free weather APIs. These APIs provide basic weather data without subscription fees. Examples include OpenWeatherMap's free tier and Weatherstack's free weather API for testing. These options are ideal for smaller projects or initial development stages.
## Global Coverage
Ensure the API provides global weather data if your application requires international weather information. APIs like Weatherstack and OpenWeatherMap offer extensive global coverage, making them suitable for worldwide applications.
## Pricing and Scalability
Consider the weather.com API pricing and other cost-related factors when selecting an API. Look for scalable pricing models that grow with your project's needs. Some providers offer flexible pricing plans that accommodate both small-scale and enterprise-level applications.
## Integrating Weather Data into Your Project
## Real-Time Weather Data in Excel
For projects requiring data analysis, integrating real-time weather data in Excel can be highly beneficial. Many APIs allow you to fetch data directly into Excel for seamless analysis. Tools like Microsoft Power Query can be used to connect Excel to the weather API.
## Building a JavaScript Weather App
Developers often use weather APIs to create dynamic applications. Building a **[JavaScript weather app](https://weatherstack.com/documentation)** is a common project that leverages real-time data to provide users with up-to-date weather information. Ensure the API you choose supports easy integration with JavaScript.
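As a starting point, a hedged sketch of the data layer for such an app might look like the following. The URL shape and JSON fields are placeholders (`api.example.com` is not a real endpoint), so adapt them to whichever API you select.

```javascript
// Minimal data layer for a JavaScript weather app. The URL and the
// response shape are illustrative placeholders, not a real API.
async function fetchCurrentWeather(apiKey, city) {
  const url = `https://api.example.com/current?access_key=${apiKey}&query=${encodeURIComponent(city)}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Weather request failed: ${res.status}`);
  return res.json();
}

// Pure formatting step, easy to unit test without touching the network.
function formatWeather({ location, current }) {
  return `${location.name}: ${current.temperature}°C, wind ${current.wind_speed} km/h`;
}

// In the page, wire the two together:
// fetchCurrentWeather("YOUR_KEY", "Paris")
//   .then((data) => {
//     document.querySelector("#weather").textContent = formatWeather(data);
//   });
```

Separating the fetch from the formatting keeps the display logic testable and makes error handling (a failed request, an unknown city) easier to add later.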
## Conclusion
Choosing the right Weather REST API for your project involves considering several factors, including real-time data capabilities, historical data access, forecasting features, data format, provider reliability, and pricing. Whether you need current weather data APIs or free APIs for testing, thorough research and careful selection will ensure your project’s success. With the right API, you can enhance your application’s functionality and provide valuable weather information to your users. | sameeranthony |
1,916,030 | Meetings vs no meetings team? | 4th question from the Dev Pools series. Same rules as before. Since we can't add polls, we select an... | 27,980 | 2024-07-11T13:30:00 | https://dev.to/devonremote/meetings-vs-no-meetings-4of6 | career, discuss, programming | **4th question** from the **Dev Pools** series.
Same rules as before. Since we can't add polls, we select an emoji.
**Do you prefer days with multiple meetings throughout the workday, or days without them?**
❤️ - I like having multiple meetings during the day
🦄 - I don't like having multiple meetings during the day
Btw, why do you think that way? | devonremote |
1,916,031 | Creating a Responsive Profile Settings UI with Tailwind CSS | In this section, we will create a user profile settings design using Tailwind CSS. This process will... | 0 | 2024-07-12T15:47:00 | https://larainfo.com/blogs/creating-a-responsive-profile-settings-ui-with-tailwind-css/ | html, tailwindcss, css, webdev | In this section, we will create a user profile settings design using Tailwind CSS. This process will involve designing a user interface that is visually appealing and user-friendly, leveraging the utility-first approach of Tailwind CSS to style various components effectively.
Create a minimalist user profile setup with Username, Email, and Password using Tailwind CSS.
```html
<div class="bg-gray-100 h-screen flex items-center justify-center">
<div class="bg-white p-8 rounded shadow-md w-full max-w-md">
<h1 class="text-2xl font-semibold mb-4">Profile Settings</h1>
<form>
<div class="mb-4">
<label for="username" class="block text-sm font-medium text-gray-600">Username</label>
<input type="text" id="username" name="username"
class="mt-1 p-2 border border-gray-300 rounded-md w-full focus:outline-none focus:ring focus:border-blue-300" />
</div>
<div class="mb-4">
<label for="email" class="block text-sm font-medium text-gray-600">Email</label>
<input type="email" id="email" name="email"
class="mt-1 p-2 border border-gray-300 rounded-md w-full focus:outline-none focus:ring focus:border-blue-300" />
</div>
<div class="mb-6">
<label for="password" class="block text-sm font-medium text-gray-600">Password</label>
<input type="password" id="password" name="password"
class="mt-1 p-2 border border-gray-300 rounded-md w-full focus:outline-none focus:ring focus:border-blue-300" />
</div>
<button type="submit"
class="w-full bg-blue-500 text-white p-2 rounded-md hover:bg-blue-600 focus:outline-none focus:ring focus:border-blue-300">
Save Changes
</button>
</form>
</div>
</div>
```

Designing user profile enhancements with Tailwind CSS: First Name, Last Name, Email, New Password, Profile Picture Update, and Account Deletion Button.
```html
<div class="bg-gray-100 min-h-screen flex items-center justify-center">
<div class="max-w-md bg-white p-8 rounded shadow-md">
<!-- Avatar Section -->
<div class="flex items-center justify-center mb-6">
<div class="w-20 h-20 mr-4 overflow-hidden rounded-full">
<img src="https://picsum.photos/200/300" alt="Avatar" class="w-full h-full object-cover" />
</div>
<div>
<label for="avatar" class="cursor-pointer text-blue-500 hover:underline">Change Profile Picture</label>
<input type="file" id="avatar" class="hidden" />
</div>
</div>
<!-- Name Section -->
<div class="grid grid-cols-2 gap-4 mb-6">
<div>
<label for="firstName" class="block text-gray-700 text-sm font-bold mb-2">First Name</label>
<input type="text" id="firstName"
class="w-full px-4 py-2 border rounded focus:outline-none focus:border-blue-500" />
</div>
<div>
<label for="lastName" class="block text-gray-700 text-sm font-bold mb-2">Last Name</label>
<input type="text" id="lastName"
class="w-full px-4 py-2 border rounded focus:outline-none focus:border-blue-500" />
</div>
</div>
<!-- Email Section -->
<div class="mb-6">
<label for="email" class="block text-gray-700 text-sm font-bold mb-2">Email</label>
<input type="email" id="email"
class="w-full px-4 py-2 border rounded focus:outline-none focus:border-blue-500" />
</div>
<!-- Password Section -->
<div class="mb-6">
<label for="password" class="block text-gray-700 text-sm font-bold mb-2">New Password</label>
<input type="password" id="password"
class="w-full px-4 py-2 border rounded focus:outline-none focus:border-blue-500" />
</div>
<!-- Buttons -->
<div class="flex justify-between">
<button
class="bg-blue-500 text-white px-4 py-2 rounded hover:bg-blue-600 focus:outline-none focus:shadow-outline-blue"
type="button">
Save Changes
</button>
<button
class="bg-red-500 text-white px-4 py-2 rounded hover:bg-red-600 focus:outline-none focus:shadow-outline-red"
type="button">
Delete Account
</button>
</div>
</div>
</div>
```

Building a Responsive Profile Settings Page with Sidebar Using Tailwind CSS.
```html
<div class="flex min-h-screen bg-gray-100">
<!-- Sidebar -->
<aside class="hidden w-1/4 bg-gray-800 text-white md:block">
<div class="p-4">
<h2 class="text-2xl font-semibold">Settings</h2>
<ul class="mt-4 space-y-2">
<li><a href="#" class="block px-4 py-2 text-sm hover:bg-gray-700">Profile</a></li>
<li><a href="#" class="block px-4 py-2 text-sm hover:bg-gray-700">Security</a></li>
<li><a href="#" class="block px-4 py-2 text-sm hover:bg-gray-700">Notifications</a></li>
</ul>
</div>
</aside>
<!-- Content -->
<div class="flex-1 p-8">
<!-- Mobile Menu Toggle Button (hidden on larger screens) -->
<div class="flex justify-end md:hidden">
<button id="menuToggle" class="text-gray-700 hover:text-gray-900 focus:outline-none">
<svg class="h-6 w-6" fill="none" stroke="currentColor" viewBox="0 0 24 24"
xmlns="http://www.w3.org/2000/svg">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M4 6h16M4 12h16m-7 6h7">
</path>
</svg>
</button>
</div>
<!-- Profile Settings -->
<div class="max-w-md rounded bg-white p-8 shadow-md">
<!-- Avatar Section -->
<div class="mb-6 flex items-center justify-center">
<div class="mr-4 h-24 w-24 overflow-hidden rounded-full">
<img src="https://picsum.photos/200/300" alt="Avatar" class="h-full w-full object-cover" />
</div>
<div>
<label for="avatar" class="cursor-pointer text-blue-500 hover:underline">Change Picture</label>
<input type="file" id="avatar" class="hidden" />
</div>
</div>
<!-- Form Section -->
<form>
<div class="grid grid-cols-2 gap-4">
<div>
<label for="firstName" class="mb-2 block text-sm font-bold text-gray-700">First Name</label>
<input type="text" id="firstName"
class="w-full rounded border px-4 py-2 focus:border-blue-500 focus:outline-none" />
</div>
<div>
<label for="lastName" class="mb-2 block text-sm font-bold text-gray-700">Last Name</label>
<input type="text" id="lastName"
class="w-full rounded border px-4 py-2 focus:border-blue-500 focus:outline-none" />
</div>
</div>
<div class="mb-6">
<label for="email" class="mb-2 block text-sm font-bold text-gray-700">Email</label>
<input type="email" id="email"
class="w-full rounded border px-4 py-2 focus:border-blue-500 focus:outline-none" />
</div>
<div class="mb-6">
<label for="password" class="mb-2 block text-sm font-bold text-gray-700">New Password</label>
<input type="password" id="password"
class="w-full rounded border px-4 py-2 focus:border-blue-500 focus:outline-none" />
</div>
<!-- Buttons -->
<div class="flex justify-end">
<button
class="focus:shadow-outline-blue rounded bg-blue-500 px-4 py-2 text-white hover:bg-blue-600 focus:outline-none"
type="button">Save Changes</button>
</div>
</form>
</div>
</div>
</div>
```
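The markup above includes a `#menuToggle` button but no script to go with it. One possible wiring (an assumption about the intended behavior, written as a plain function so it can be exercised with stub elements) is to toggle Tailwind's `hidden` class on the sidebar:

```javascript
// Toggle logic for the mobile menu button in the markup above.
// Written as a plain function so it can be tested with stub elements;
// in the page, pass in the real DOM nodes.
function wireMenuToggle(button, sidebar) {
  button.addEventListener("click", () => {
    sidebar.classList.toggle("hidden");
  });
}

// In the page (the sidebar is the layout's only <aside>):
// wireMenuToggle(document.getElementById("menuToggle"),
//                document.querySelector("aside"));
```

Because the sidebar also carries `md:block`, toggling `hidden` only affects small screens, which matches the `md:hidden` toggle button.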
 | saim_ansari |
1,916,032 | My First Post | Hi everyone! This is my first post on the dev community. I am a Linux enthusiast. I am interested in... | 0 | 2024-07-08T15:43:41 | https://dev.to/nerujanp/my-first-post-49e3 | **Hi everyone!**
This is my first post on the dev community. I am a Linux enthusiast. I am interested in open source and programming. I have long been looking to learn these things in Tamil, and I found this Python course. It is my starting point for learning them. Today, I attended my first Python class. I hope it will be very useful to me. Thank you for the great opportunity.
_Neru
From Sri Lanka 🇱🇰 ♥️ _ | nerujanp | |
1,916,033 | Rule 1: As Simple as Possible, but No Simpler | A series of articles on the book As Regras da programação (The Rules of Programming) by Chris Zimmerman. The book covers 21... | 0 | 2024-07-10T01:07:51 | https://dev.to/fernanda_leite_febc2f0459/regra-1-o-mais-simples-possivel-mas-nao-mais-simples-do-que-isso-563p | programming, learning, cleancode | A series of articles on the book **As Regras da programação** (The Rules of Programming) by Chris Zimmerman. The book presents 21 rules that help programmers write better code. I will talk a bit about each rule from my point of view, bringing in some examples and opinions on the book, with the main goal of consolidating and sharing knowledge.
---
One of the most essential skills for a programmer is the ability to abstract the problem and find possible solutions. I believe that is the key point of this rule: thinking about the problem.
The author says, _“(…) the best way to implement a solution to any problem is the simplest one that meets all of that problem’s requirements.”_ This sentence makes it obvious that abstracting the problem is essential: understanding all of its ramifications and requirements, and even assessing its complexity.
This investigation of the problem may show us that there is no simple solution for a broad definition of it, but if we break it into small pieces we can find simple solutions that address the part of the problem that actually needs to be solved. On this point, the author says, _“If you can’t simplify the solution, try simplifying the problem.”_
It is better for code to be simpler (as long as it solves the problem), but how do we evaluate that simplicity? Is there a limit to it? The book lays out three basic criteria for evaluating simplicity: how much code was written, how many ideas were introduced, and how much time it would take to explain it (ease of creation and ease of understanding).
Personally, I think the first criterion should not be a _“rule”_: the code with the fewest lines is not always the simplest. An extremely condensed implementation often makes it harder to understand what a given stretch of code is doing. A good synthesis of these criteria, in the author’s own words, is _“Simple code is easy to read - and the simplest code can be read straight through from beginning to end, the way we read a book.”_
Finally, the more complex the code becomes, the harder it is to work with, and progress gets ever slower. Whenever you can, look for opportunities to remove complexity, or design solutions so that new features do not add to the existing complexity. **Make your team work together in the simplest way possible.** | fernanda_leite_febc2f0459 |