id | title | description | collection_id | published_timestamp | canonical_url | tag_list | body_markdown | user_username |
|---|---|---|---|---|---|---|---|---|
1,119,335 | 4 Errors I Made When Building A Financial Application | 1. Not knowing which datatype to use in MySQL I once heard it’s better to use integers... | 0 | 2022-06-20T12:40:56 | https://dev.to/mratiebatie/4-errors-i-made-when-building-a-financial-application-2i0e | webdev, php, finance | ## 1. Not knowing which datatype to use in MySQL
I once heard it’s better to use integers when handling financial data: you convert a price like €10 to its lowest unit (cents in this case), so you work with 1000 instead of 10.00. This avoids the floating-point problem, which is best shown by typing the following in your Google Chrome console:
```
> 0.1 + 0.2
0.30000000000000004
```
If you want to learn more about this problem visit this website. Working with integers hurts readability, though (quick: how much is 13310 in euros?). Another disadvantage is that a signed 32-bit integer has a limit of 2,147,483,647, which is roughly €21,474,836.47. With the euro you probably wouldn’t run into this limit quickly, but with the Vietnamese dong it wouldn’t work. Learnings: use decimals (not floats!) in MySQL to store monetary values. Depending on how many decimals you need, `decimal(15,2)` is often enough.
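If you do go the integer route anyway, the arithmetic stays safe as long as you round exactly once, at the end. A minimal sketch (the helper name is invented, not from any library):

```typescript
// A sketch, not library code: keep every amount as integer cents
// and round exactly once, at the end of the calculation.
function addVat(netCents: number, vatRate: number): number {
  return netCents + Math.round(netCents * vatRate);
}

const net = 1000; // €10.00 expressed in cents
const gross = addVat(net, 0.21);

console.log(gross); // 1210, i.e. €12.10
console.log((gross / 100).toFixed(2)); // "12.10", formatted only for display
```

Whether you store cents as integers or euros as `decimal(15,2)`, the same rule applies: do the arithmetic on exact values and convert to a display format as late as possible.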
## 2. Not having something to fact-check the numbers
Imagine we have a shopping cart where there’s 1 product for € 100, the VAT of € 21 and a total of € 131. The first time you’re sharp and you immediately see your mistake. After the 100th time, you start to be blind to those mistakes.
That’s why you need something to check whether the numbers are correct. I’ve created a Google Sheet for my team and me where we can all verify them. This is especially crucial if you work with people who test your product but don’t have access to the code. How else would they know whether the displayed price is correct?
## 3. Not splitting the price into all the components
Every part of a price should be stored separately. If not, there’s no way to reproduce the components later when you need them. So save the VAT amount, the discount amount, the base price, and the total all separately. Chances are there will be more price components in your app in the future.
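As a sketch of what that separation can look like (field names are invented), each component lives in its own column or field, and the stored total matches the parts:

```typescript
// Hypothetical shape: every component of the price is persisted,
// so any order can be reproduced later without recomputation.
interface PriceBreakdown {
  baseCents: number;
  discountCents: number;
  vatCents: number;
  totalCents: number;
}

function buildBreakdown(
  baseCents: number,
  discountCents: number,
  vatRate: number
): PriceBreakdown {
  const taxable = baseCents - discountCents;
  const vatCents = Math.round(taxable * vatRate);
  return { baseCents, discountCents, vatCents, totalCents: taxable + vatCents };
}

const lineItem = buildBreakdown(10000, 1000, 0.21); // €100 base, €10 discount, 21% VAT
console.log(lineItem.totalCents); // 10890, i.e. €108.90
```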
## 4. Using foreign keys in the ‘orders’ table
One of my dumbest mistakes. I had an ‘orders’ table where all the orders of an e-commerce store were stored. Unfortunately, it referenced the actual products, and I read the product price through that reference. Everything was fine until one of the product prices changed and older orders were affected by it 😅
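The usual fix is to snapshot the price into the order row at the moment of sale instead of joining to the product later. A small illustration (types and names are made up for the example):

```typescript
// Invented types for illustration: the order line copies ("snapshots")
// the product data at the moment of sale instead of joining to it later.
interface Product { id: number; name: string; priceCents: number; }
interface OrderLine { productId: number; productName: string; priceCents: number; }

function snapshotLine(product: Product): OrderLine {
  return {
    productId: product.id,       // keep the reference for traceability
    productName: product.name,   // but copy everything the order needs
    priceCents: product.priceCents,
  };
}

const widget: Product = { id: 1, name: "Widget", priceCents: 1999 };
const orderLine = snapshotLine(widget);

widget.priceCents = 2499; // the price changes after the sale...
console.log(orderLine.priceCents); // 1999, the old order is unaffected
```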
I’ve made many mistakes even though I have been developing applications for years. But without resistance, there’s no growth, so I tend to share my mistakes so you might prevent them.
I’m planning on writing an ebook on developing applications where you work with money. If you’re interested you might wanna subscribe to get free access to the first chapter.
[Subscribe here](https://sjorsvdongen.gumroad.com/) | mratiebatie |
1,004,563 | Appendix: Security, Identity, and Compliance Services - AWS Certified Cloud Practitioner Study Guide | AWS Artifact No cost, self-service portal for on-demand access to AWS’ compliance reports Reports... | 17,089 | 2022-02-28T21:26:37 | https://dev.to/aidutcher/appendix-security-identity-and-compliance-services-aws-certified-cloud-practitioner-study-guide-18hg | [AWS Artifact](https://aws.amazon.com/artifact/)
- No cost, self-service portal for on-demand access to AWS’ compliance reports
- Reports include Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies across geographies and compliance verticals
- Agreements include the Business Associate Addendum (BAA) and the Nondisclosure Agreement (NDA)
[AWS Certificate Manager (ACM)](https://aws.amazon.com/certificate-manager/)
- Provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources
- Request and deploy certificates
- ACM handles renewals
- Create private certificates for internal resources
[AWS CloudHSM](https://aws.amazon.com/cloudhsm/)
- Hardware security module (HSM) that enables you to easily generate and use your own encryption keys
- Export all of your keys to most other commercially-available HSMs, subject to your configurations
[Amazon Cognito](https://aws.amazon.com/cognito/)
- Lets you add user sign-up, sign-in, and access control to your web and mobile apps
- Supports multi-factor authentication and encryption of data-at-rest and in-transit
[Amazon Detective](https://aws.amazon.com/detective/)
- Analyze and visualize security data to rapidly get to the root cause of potential security issues
- Automatically collects log data from AWS resources and uses ML, statistical analysis, and graph theory to assist with security investigations
[Amazon GuardDuty](https://aws.amazon.com/guardduty/)
- Threat detection service that continuously monitors AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation
[AWS Identity and Access Management (IAM)](https://aws.amazon.com/iam/)
- Fine-grained access control across all of AWS
- Specify who can access which services and resources, and under which conditions
[Amazon Inspector](https://aws.amazon.com/inspector/)
- Automated and continual vulnerability management
- Continually scans AWS workloads for software vulnerabilities and unintended network exposure
[AWS License Manager](https://aws.amazon.com/license-manager/)
- Create customized licensing rules that mirror the terms of licensing agreements
- Use these rules to help prevent licensing violations
- Prevent a licensing breach by stopping the instance from launching or by notifying administrators about the infringement
[Amazon Macie](https://aws.amazon.com/macie/)
- Fully managed data security and data privacy service
- Uses machine learning and pattern matching to discover and protect sensitive data
- Automatically provides an inventory of Amazon S3 buckets including a list of unencrypted buckets, publicly accessible buckets, and buckets shared with AWS accounts outside those you have defined in AWS Organizations
- Findings can be searched and filtered in the AWS Management Console and sent to Amazon EventBridge
[AWS Shield](https://aws.amazon.com/shield/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc)
- Managed Distributed Denial of Service (DDoS) protection service
- Provides always-on detection and automatic inline mitigations
- All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge
[AWS WAF](https://aws.amazon.com/waf/)
- Web application firewall that helps protect your web applications or APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources
- Create security rules that control bot traffic and block common attack patterns, such as SQL injection or cross-site scripting
| aidutcher | |
1,004,679 | New Gmail layout | Learn about the new integrated Gmail layout Gmail has a new integrated view to organize Mail, Chat,... | 0 | 2022-02-28T23:34:52 | https://dev.to/fady_nabil10/new-gmail-layout-33cd | gsuite, gmail, googlecloud | **Learn about the new integrated Gmail layout**
Gmail has a new integrated view to organize Mail, Chat, Spaces, and Meet in one place.

With the new layout, you can:
- View Google apps integrated into the Gmail main menu
- View specific app menus in the collapsible panel
- Get notified of new Chat and Space messages through notification bubbles
- Point to each app’s icon to preview what’s going on without switching context

**New app menu experience**
All Google apps in Gmail are on the farthest left menu. Click on an app's name to switch between them or point to an icon to see a preview. Each app's main menu displays in a collapsible panel.
You can hide or show the collapsible panel at any time.
To hide or show the collapsible panel, on the top left, click Show or Hide main menu Menu.
**New chat experience**
From the Chat tab, you can access individual or group chat messages.
To open the chat into a small pop-up window at the bottom of your screen, go to the top of any chat or next to the chat message in the side panel and click Open in a pop-up. The window stays in view as you move to other tabs, such as Mail or Spaces.
**Chat message notification bubbles**
When you get a new chat or space message, the notification appears on the bottom left corner in a bubble. When you point to the bubble, a message preview displays.
Reply to a message shown in a bubble:
- To open the message and reply directly from the chat or spaces tab, click the bubble.
- To open the chat into a small pop-up window at the bottom of your screen and reply, click Open in a pop-up or Reply.
To turn on notification bubbles:
1. At the top right of your Gmail window, next to your status indicator, click More options Drop down arrow and then Chat notification settings.
2. In the window that appears, check the boxes next to “Allow Chat notifications” and “Open chat bubbles for new messages.”
3. At the bottom of the window, click Done.
Tips:
- To turn notification bubbles off, uncheck the box next to “Open chat bubbles for new messages.”
- To open chat bubbles in a pop-up view instead of fullscreen, uncheck the box “Open chat bubbles in full screen.”
**Opt in to the new view**
Important: To use the new view, you must turn on Chat in Gmail and set Chat to the left hand panel. Learn how to turn on Chat in Gmail.
1. At the top right, click Settings Settings.
2. Under “Quick settings,” click Try out the new Gmail view.
3. In the new window, click Reload.
**Opt out of the new view**
1. At the top right, click Settings Settings.
2. Under “Quick settings,” click Go back to the original Gmail view.
3. In the new window, click Reload.
 | fady_nabil10 |
1,004,831 | Vue3: setup router for NotFound page | Steps 1, Create your own "NotFound.vue" page 2, Add code to main.ts: const routes = [ ... | 0 | 2022-03-01T03:34:36 | https://dev.to/bitecode/vue3-setup-router-for-notfound-page-22p6 | vue, nginx | ## Steps
1, Create your own "NotFound.vue" page
2, Add code to `main.ts`:
```typescript
const routes = [
{
path: "/",
name: "home",
component: Home,
},
// ... other paths ...
{
path: "/:pathMatch(.*)*",
name: "not-found",
component: () => import("@/pages/NotFound.vue"),
},
]
```
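For context, the `routes` array still has to be handed to a router instance and installed on the app; a minimal sketch assuming vue-router 4, with the `routes` array from step 2 in scope (file names are illustrative):

```typescript
import { createApp } from "vue";
import { createRouter, createWebHistory } from "vue-router";
import App from "./App.vue";

const router = createRouter({
  // History mode gives clean URLs, which is exactly why the Nginx
  // fallback in step 3 is needed for direct hits on deep links.
  history: createWebHistory(),
  routes, // the array from step 2
});

createApp(App).use(router).mount("#app");
```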
3, Config Nginx:
```nginx
location / {
try_files $uri $uri/ /index.html;
}
``` | bitecode |
1,005,078 | The Importance of Conversational AI in Retail | The rising E-commerce marketing and giants like Amazon and Alibaba have quickly shaped the retail... | 0 | 2022-03-01T09:08:39 | https://dev.to/sandeepgundla/the-importance-of-conversational-ai-in-retail-42m0 | conversationalai, aiinretail, chatbots, ai | The rising E-commerce marketing and giants like Amazon and Alibaba have quickly shaped the retail industry with the help of digitization. How consumers used to buy and how the retail industry is functioning now, are different. With rising competition, consumers should not avoid the significance of conversational AI in retail.
On top of digitalization, the Covid-19 pandemic played a huge role in accelerating the growth of online mediums for the retail industry. There is a whole different pattern that consumers now follow when it comes to online shopping. To meet the demands of consumers, digital retailers are heavily relying on AI and its pivotal role in directing all the retail operations globally.
Let's look at some of the major features of <a href="https://www.xcubelabs.com/services/chatbot-development/">conversational AI</a> that are important for the Retail Industry.
**24/7 Access for Consumers**
As a consumer, nobody wants limited access to services, especially in a fast-paced world. Consumers now expect 24/7 engagement with the brands they purchase from, and for brands, it is important to serve consumers on time to maintain their reputation. This is where conversational AI links the two sides, providing 24/7 accessibility that connects consumers and brands for a smooth shopping experience.
Retailers are adopting AI-powered chatbots and voice bots to answer customer queries about products and deliveries. Because of their advanced digital capabilities, these bots send quick, accurate responses, giving consumers the feeling of a genuine brand-to-consumer interaction.
**Insights with AI Data-Mining for Better Operations**
There is huge traffic in the retail sector, and with more and more retail platforms emerging, this traffic will likely increase in the future. A lot of data accumulates within minutes from consumers, followed by their daily inquiries and requests.
With AI assistants in retail, it is possible to compile, observe, and deduce valuable insights about consumer behavior for better shopping experiences. These insights can be based on previous interactions with the platform as well as recent ones. <a href="https://www.xcubelabs.com/services/artificial-intelligence-services/">Artificial intelligence</a> can do this thanks to smart integration with the mobile applications consumers use. This helps provide a complete overview of what's trending in the market and what's not.
**Integration of Online & Physical Retail**
Conversational AI links all the bridges between online and physical retail. The best example of this is the pandemic period where many people ordered their desired products online which they previously used to purchase physically.
Furthermore, strategies like home delivery and self-pickup services were also made available. Conversational AI boosted consumer traffic by allowing people to browse and check products both online and offline. This step alone engaged many customers on retail platforms. Consumers could see inventories, redeem discount coupons, and get promotional deals. On top of this, AI introduced them to seasonal collections, encouraging them to explore different stores and brands.
**Interactive Content Consumption**
Wherever you surf online, you will see a gigantic amount of content, much of it fed to you by different retail platforms. This is a major digital development in the retail sector globally. Platforms use content to drive revenue from consumers.
When people search for products on their phones, retail platforms analyze the queries and surface the top-reviewed brands to buy from. This mobile consumer traffic indirectly helps promote retail platforms. Prime examples are Google, YouTube, and Facebook: YouTube now displays retail prices of products thanks to Google's testing and data analysis. The upside is that consumers can buy the best version of their desired items. All of this is driven by conversational AI in the retail sector.
Conversational AI is frequently guiding users in shopping and analyzing and reviewing suitable items. Chatbots and voice bots on the other hand provide helpful virtual insights to confirm user purchases.
**AI-powered Virtual Assistants Increase Profitability**
By now you would have understood what conversational AI has been doing in the retail sector and a lot of the credit for the rise of retail platforms goes to AI. The other most important area where conversational AI is currently assisting consumers is the availability of virtual assistants.
These virtual assistants analyze consumers' queries and behaviors and route them to the relevant channels. This helps in analyzing consumer patterns and improves the shopping experience on retail platforms. Virtual assistants send personalized alerts and promotional deals to consumers, which drives profitability and completes the loop between user and platform.
They also help generate customized data reports, automate consumer tasks, deliver relevant updates, unify disparate systems, and contribute to a better and faster retailing experience.
| sandeepgundla |
1,005,357 | Can you list down a few common patterns you follow in your react code base | A post by Krishna Damaraju | 0 | 2022-03-01T14:18:59 | https://dev.to/sarathsantoshdamaraju/can-you-list-down-a-few-common-patterns-you-follow-in-your-react-code-base-24l0 | discuss, webdev, react | ---
title: Can you list down a few common patterns you follow in your react code base
published: true
tags: discuss, webDev, react
---
| sarathsantoshdamaraju |
1,005,911 | Nucleoid: Low-code Framework for Node.js | Nucleoid is low-code framework for Node.js, lets you build your APIs with the help of AI and built-in datastore in declarative runtime environment | 17,027 | 2022-03-02T17:55:00 | https://dev.to/nucleoid/nucleoid-low-code-framework-for-nodejs-2395 | node, javascript, lowcode, ai | ---
series: Tutorial
description: Nucleoid is low-code framework for Node.js, lets you build your APIs with the help of AI and built-in datastore in declarative runtime environment
---
## What is Nucleoid?
[Nucleoid](https://github.com/NucleoidJS/Nucleoid) is an AI-managed low-code framework that uses a symbolic (logic-based) AI model. It tracks statements written in JavaScript and builds relationships between variables, objects, functions, etc. in a graph. So, as you write code just like any other Node.js application, the runtime translates your business logic into a fully working application by managing the JS state and storing it in the built-in data store, so your application doesn't require an external database or anything else.

### How it works
I. Write your business logic in JavaScript
II. Nucleoid runtime renders your codes
III. Creates APIs with the built-in datastore
## Hello World
```shell
> npm i nucleoidjs
```
Once installed, you can simply run with Express.js
```javascript
const nucleoid = require("nucleoidjs");
const app = nucleoid();
class Item {
constructor(name, barcode) {
this.name = name;
this.barcode = barcode;
}
}
nucleoid.register(Item);
// 👍 Only needed a business logic and 💖
// "Create an item with given name and barcode,
// but the barcode must be unique"
app.post("/items", (req) => {
const name = req.body.name;
const barcode = req.body.barcode;
const check = Item.find((i) => i.barcode === barcode);
if (check) {
throw "DUPLICATE_BARCODE";
}
return new Item(name, barcode);
});
app.listen(3000);
```
This is pretty much it, thanks to the Nucleoid runtime, only with this :point_up_2:, you successfully persisted your first object with the business logic :sunglasses:
### OpenAPI Integration with Nucleoid IDE
Nucleoid IDE is a web interface that helps to run very same npm package with OpenAPI.
[](https://nucleoid.com/ide/)


Let's break it down:
### Defining a class
Just like classes in JS, but they must be registered before being used inside Nucleoid:
```javascript
class Order {
constructor(item, qty) {
this.item = item;
this.qty = qty;
}
}
nucleoid.register(Order);
```
### API & Business Logic
```javascript
app.post("/orders", () => new Order("ITEM-123", 3));
```
```json
{
"id": "order0",
"item": "ITEM-123",
"qty": 3
}
```
Once the REST endpoint is called, a couple of things happen. First of all, `new Order("ITEM-123", 3)` generates a unique `id` that becomes part of the object as well as the JSON, and that `id` can be used to retrieve the object later on. In addition, <u>the order instance is stored automatically by the runtime</u> without requiring an external database.
```javascript
// Retrieve order object with "order0" id
app.get("/orders", () => Order["order0"]);
```
Another thing Nucleoid does when defining a class is create a list under the class name; when an object is initiated, it is also stored in that list.
### Query
Queries also can be done like SQL, but in JS :sunglasses:
```javascript
app.get("/orders", () => Order.filter((o) => o.item == "ITEM-123"));
app.get("/orders", () => Order.find((o) => o.id == "order0"));
```
or you can lodash it :smile:
```javascript
app.get("/orders", () => _.map(Order, (o) => ({ x: o.id, y: o.qty})));
```
## Passing HTTP info
Let's add a little bit of color with HTTP info and make it more real :fire:
Nucleoid uses Express.js and passes `req` as `{ params, query, body }`
```javascript
app.post("/orders", (req) => new Order(req.body.item, req.body.qty));
app.get("/users/:user", (req) => User[req.params.user]);
```
> :bulb: The return value of the function is automatically converted
> to JSON, as a shorthand for `res.send(object)` in Express.js
and let's add some business logic:
```javascript
app.post("/orders", (req) => {
const qty = req.body.qty;
const item = req.body.item;
if(qty < 0) {
throw "INVALID_QTY"
}
if(item == "ITEM-123" && qty > 3) {
throw "QTY_LIMITATION"
}
return new Order(item, qty);
});
```
---
Thanks to declarative programming, we have a brand-new approach to data and logic. As we are still discovering what we can do with this powerful programming model, please join us with any type of contribution!
<center>
Learn more at <a href="https://github.com/NucleoidJS/Nucleoid">https://github.com/NucleoidJS/Nucleoid</a>
</center>
[](https://nucleoid.com) | canmingir |
1,006,229 | JavaScript Q&A | hello, I'm a new member, nice to meet everyone | 0 | 2022-03-02T09:15:08 | https://dev.to/phquanghng/hoi-dap-javascript-1m5d | javascript | hello, I'm a new member, nice to meet everyone | phquanghng |
1,006,284 | AMA with Uttam Singh, Developer Advocate at Polygon | Web3 is a revolutionary space which aims to help on building sustainable and incentivized... | 0 | 2022-03-02T11:03:03 | https://dev.to/aviyel/ama-with-uttam-singh-developer-advocate-at-polygon-2ok | ama, web3, opensource, podcast | Web3 is a revolutionary space which aims to help on building sustainable and incentivized communities. This might be quite fascinating by seeing the enthusiasm around but it is very essential to know the basis of Web3.
Aviyel is introducing brand new series for Web3 enthusiasts to get started with their journey. This includes podcasts, hands on sessions and AMAs with experts.
**[Uttam Singh](https://twitter.com/singhk_uttam)** is a Developer Advocate at Polygon. Uttam has exceptional experience with crypto and blockchain. He helps various organizations to build their community around web3 & blockchain. He worked on implementing a new Network (Bor Mainnet) to the Matic BOR chain. Uttam has also performed User Interface testing of various Polygon products like Web Wallet, Bridge, Swap. Uttam is also an open-source contributor and has made numerous contributions to projects like Numpy, Ethereum, Edamontology and also was part of GSSOC 2021.
**Feel free to ask any questions you have in the discussion thread below!**
And don't forget to RSVP to attend the live session on **Unlocking Web3 101 with Uttam Singh** on [Aviyel](https://aviyel.com/events/315/unlocking-web3-101-with-uttam-singh).
#### What will you learn in this session?
Uttam will share his experience in Web3, Tips to get started, followed by AMA.
**Topics to be discussed**
* Uttam's journey into Web3
* Introduction to Decentralization
* Tips to get started with Blockchain, Web3, and Crypto.
* Panel Discussion
See you at the Live Event 💜 | joshuapoddoku |
1,006,365 | Dependency Injection in JavaScript: Write Testable Code Easily | I struggled with two aspects of software development as a junior engineer: structuring large... | 0 | 2022-03-02T13:23:31 | https://blog.appsignal.com/2022/02/16/dependency-injection-in-javascript-write-testable-code-easily.html | javascript, node | I struggled with two aspects of software development as a junior engineer: structuring large codebases and writing testable code. Test-driven development is such a common technique that is often taken for granted, but it's not always clear how code can be made fully testable.
I remember reading examples where an author would cleanly unit test a function, and in principle, it made sense. But real code doesn't look like those examples. No matter how thoughtfully it is written, real code has some level of complexity.
Ultimately, a lot of that complexity comes down to managing dependencies. This is arguably one of the chief challenges of software engineering; to quote the [famous poem](https://web.cs.dal.ca/~johnston/poetry/island.html), "no man is an island entire of itself."
This article shares a few powerful tools to help you write testable code that grows into neat, manageable code bases.
But first, we need to ask: what are dependencies?
## What Is A Dependency?
A dependency is any external resource a program needs to work. These can be external libraries the code literally depends on or services the program functionally needs, like internet APIs and databases.
The tools we use to manage these dependencies are different, but the problems are ultimately the same. A unit of code depends on other units of code, which themselves often have dependencies. For the program to work, all dependencies must be resolved recursively.
If you're not familiar with how package managers work, you might be surprised at the complexity of this problem. However, if you've written and attempted to test a webserver that relies on a database, you're probably familiar with another version of the same problem. Luckily for us, this is a well-studied problem.
Let's take a quick look at how you can use SOLID principles to improve the maintainability and stability of your code.
## SOLID Principles
Robert Martin's SOLID principles are excellent guidelines for writing object-oriented code. I argue that two of these principles — the Single Responsibility principle and Dependency Inversion principle — can be critically important outside of OO design, as well.
### Single Responsibility Principle
The Single Responsibility principle states that a class or function should have one — and only one — purpose, and thus only one reason to change. This resembles the [UNIX philosophy](http://www.catb.org/esr/writings/taoup/html/ch01s06.html) — in essence, do one thing, and do it well. Keep your units simple and reliable, and achieve complex solutions by composing simple pieces.
For example, an Express handler function might sanitize and validate a request, perform some business logic, and store the result in a database. This function performs many jobs. Suppose we redesign it to follow the Single Responsibility principle. In that case, we move input validation, business logic, and database interactions into three separate functions that can be composed to handle a request. The handler itself does only what its name implies: handle an HTTP request.
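As a rough sketch of that split (all names hypothetical), the handler shrinks to glue code composing three single-purpose functions, each testable without HTTP or a database:

```typescript
// A sketch of the split: each function has one job and one reason to change.
type ListingInput = { title: string };

function validate(input: ListingInput): string[] {
  // validation only
  return input.title.length >= 10 ? [] : ["title must be at least 10 characters"];
}

function createListing(input: ListingInput): { id: number; title: string } {
  // business logic only; persistence would be injected in a real app
  return { id: 1, title: input.title };
}

function handle(input: ListingInput): { status: number; body: unknown } {
  // the handler just composes the pieces
  const errors = validate(input);
  if (errors.length > 0) return { status: 400, body: { errors } };
  return { status: 200, body: createListing(input) };
}

console.log(handle({ title: "short" }).status); // 400
console.log(handle({ title: "Senior Node.js developer" }).status); // 200
```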
### Dependency Inversion Principle
The Dependency Inversion principle encourages us to depend on abstractions instead of concretions. This, too, has to do with separation of concerns.
To return to our Express handler example, if the handler function directly depends on a database connection, this introduces a host of potential problems. Say we notice our site is underperforming and decide to add caching; now we'll need to manage two different database connections in our handler function, potentially repeating cache checking logic over and over throughout the codebase and increasing the likelihood of bugs.
What's more, the business logic in the handler typically won't care about the details of the cache solution; all it needs is the data. If we instead depend on an abstraction of our database, changes to persistence logic stay contained, and we reduce the risk that a small change forces us to rewrite a ton of code.
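In TypeScript, that abstraction is typically an interface the business layer depends on, with the concrete database (or database-plus-cache) hidden behind it. A minimal sketch with invented names:

```typescript
// The business layer depends on this abstraction, not on MySQL or a cache.
interface PostRepository {
  save(title: string): Promise<number>; // returns the new post id
}

// One concrete implementation; a MySQL or MySQL-plus-cache version
// would implement the same interface without touching the callers.
class InMemoryPostRepository implements PostRepository {
  private titles: string[] = [];
  async save(title: string): Promise<number> {
    this.titles.push(title);
    return this.titles.length; // 1-based id
  }
}

// Business logic receives the abstraction via its argument list.
async function publishPost(repo: PostRepository, title: string): Promise<number> {
  return repo.save(title);
}

publishPost(new InMemoryPostRepository(), "Node.js developer wanted")
  .then((id) => console.log(id)); // 1
```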
The problem I've found with these principles is often in their presentation; it's difficult to present them on a general level without a fair bit of hand-waving.
I want to explain them concretely. Let's look at how to break a large, difficult-to-test handler function into small, testable units using these two principles.
### Example: An Overwhelmed Express Handler for Node.js
Our example is an Express handler function that accepts a POST request and creates a listing on a job board for Node.js developers. It validates the input and stores the listing. If the user is an approved employer, the post is made public immediately, otherwise, it is marked for moderation.
```ts
const app = express();
app.use(express.json());
let db: Connection;
const title = { min: 10, max: 100 };
const description = { min: 250, max: 10000 };
const salary = { min: 30000, max: 500000 };
const workTypes = ["remote", "on-site"];
app.post("/", async (req, res) => {
// validate input
const input = req.body?.input;
try {
const errors: Record<string, string> = {};
if (
input.jobTitle.length < title.min ||
input.jobTitle.length > title.max
) {
errors.jobTitle = `must be between ${title.min} and ${title.max} characters`;
}
if (
input.description.length < description.min ||
input.description.length > description.max
) {
errors.description = `must be between ${description.min} and ${description.max} characters`;
}
if (Number.isNaN(Number(input.salary))) {
errors.salary = `salary must be a number`;
} else if (input.salary < salary.min || input.salary > salary.max) {
errors.salary = `salary must be between ${salary.min} and ${salary.max}`;
}
if (!workTypes.includes(input.workType.toLowerCase())) {
errors.workType = `must be one of ${workTypes.join("|")}`;
}
if (Object.keys(errors).length > 0) {
res.status(400);
return res.json(errors);
}
} catch (error) {
res.status(400);
return res.json({ error });
}
const userId = req.get("user-id");
try {
// retrieve the posting user and check privileges
const [[user]]: any = await db.query(
"SELECT id, username, is_approved FROM user WHERE id = ?",
[userId]
);
const postApprovedAt = Boolean(user.is_approved) ? new Date() : null;
const [result]: any = await db.query(
"INSERT INTO post (job_title, description, poster_id, salary, work_type, approved_at) VALUES (?, ?, ?, ?, ?, ?)",
[
input.jobTitle,
input.description,
user.id,
input.salary,
input.workType,
postApprovedAt,
]
);
res.status(200);
res.json({
ok: true,
postId: result.insertId,
});
} catch (error) {
res.status(500);
res.json({ error });
}
});
```
This function has a lot of problems:
**1. It does too many jobs to be practically testable.**
We can't test that validation works without being connected to a functioning database, and we can't test storing and retrieving posts from the database without building fully-fledged HTTP requests.
**2. It depends on a global variable.**
Maybe we don't want tests polluting our development database. How can we instruct the function to use a different database connection (or even a mock) when the database connection is hard-coded as global?
**3. It's repetitive.**
Any other handler that needs to retrieve a user from their ID will essentially duplicate code from this handler.
### Layered Architecture for Separation of Concerns in JavaScript
Suppose each function or class performs only one action. In that case, a function needs to handle the user interaction, another needs to perform the desired business logic, and another needs to interact with the database.
A common visual metaphor for this that you're likely familiar with is a _layered architecture_. A layered architecture is often depicted as four layers stacked on top of one another, with the database at the bottom and the API interface at the top.
When thinking about injecting dependencies, though, I find it more useful to think of these layers like the layers of an onion. Each layer must contain all of its dependencies to function, and **only** the layer that immediately touches another layer may interact with it directly:

The presentation layer, for example, should not interact directly with the persistence layer; the business logic should be in the business layer, which may then call the persistence layer.
It may not be immediately clear why this is beneficial — it certainly can sound like we are just making rules for ourselves to make things harder. And it actually may take longer to write code this way, but we are investing time in making the code readable, maintainable, and testable down the road.
### Separation of Concerns: An Example
Here's what actually happens when we start separating concerns. We'll start with classes to manage the data stored in the database (part of the persistence layer):
```ts
// Class for managing users stored in the database
class UserStore {
private db: Connection;
constructor(db: Connection) {
this.db = db;
}
async findById(id: number): Promise<User> {
const [[user]]: any = await this.db.query(
"SELECT id, username, is_approved AS approved FROM user WHERE id = ?",
[id]
);
return user;
}
}
```
```ts
// Class for managing job listings stored in the database
class PostStore {
private db: Connection;
constructor(db: Connection) {
this.db = db;
}
async store(
jobTitle: string,
description: string,
salary: number,
workType: WorkType,
posterId: number,
approvedAt?: Date
): Promise<Post> {
const [result]: any = await this.db.query(
"INSERT INTO post (job_title, description, poster_id, salary, work_type, approved_at) VALUES (?, ?, ?, ?, ?, ?)",
[jobTitle, description, posterId, salary, workType, approvedAt]
);
return {
id: result.insertId,
jobTitle,
description,
salary,
workType,
posterId,
approvedAt,
};
}
}
```
Notice these classes are incredibly simple — in fact, they're simple enough to not need to be classes at all. You could write a function returning plain-old JavaScript objects or even "function factories" to inject dependencies into your functional units. Personally, I like to use classes, as they make it very easy to associate a set of methods with their dependencies in a logical unit.
But JavaScript was not born as an object-oriented language, and many JS and TS developers prefer a more functional or procedural style. Easy! Let's use a function that returns a plain object to achieve the same goal:
```ts
// Service object for managing business logic surrounding posts
export function PostService(userStore: UserStore, postStore: PostStore) {
return {
store: async (
jobTitle: string,
description: string,
salary: number,
workType: WorkType,
posterId: number
) => {
const user = await userStore.findById(posterId);
// if posting user is trusted, make the job available immediately
const approvedAt = user.approved ? new Date() : undefined;
const post = await postStore.store(
jobTitle,
description,
salary,
workType,
posterId,
approvedAt
);
return post;
},
};
}
```
One disadvantage of this approach is that there isn't a well-defined type for the service object that's returned. We need to explicitly write one and mark it as the return type of the function, or use TypeScript utility types elsewhere to derive the type.
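To make that concrete, here is a hypothetical standalone sketch (`CounterService` and its names are mine, not from this codebase) showing both options for naming a factory's return type:

```typescript
// A tiny factory in the same style as the service factories above.
function CounterService(start: number) {
  let count = start;
  return {
    increment: () => ++count,
    value: () => count,
  };
}

// Option 1: derive the service type from the factory itself.
type CounterServiceType = ReturnType<typeof CounterService>;

// Option 2: declare the contract explicitly and let the factory satisfy it.
interface CounterApi {
  increment(): number;
  value(): number;
}

const svc: CounterApi = CounterService(41);
console.log(svc.increment()); // prints 42
```

Option 2 costs a little duplication, but it gives consumers a stable, documented type to import.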
We're already starting to see the benefits of separation of concerns here. Our business logic now depends on the abstractions of the persistence layer rather than the concrete database connection. We can assume the persistence layer will work as expected from inside the post service. The only job of the business layer is to enforce business logic, then pass persistence duty off to the store classes.
Before testing the new code, we can rewrite our handler function with injected dependencies using a very simple function factory pattern. Now, this function's only job is to validate an incoming request and pass it off to the application's business logic layer. I'll spare you the boredom of the input validation since we should be using a well-tested third-party library for this anyway.
```ts
export const StorePostHandlerFactory =
(postService: ReturnType<typeof PostService>) =>
async (req: Request, res: Response) => {
const input = req.body.input;
// validate input fields ...
try {
const post = await postService.store(
input.jobTitle,
input.description,
input.salary,
input.workType,
Number(req.headers.userId)
);
res.status(200);
res.json(post);
} catch (error) {
res.status(error.httpStatus);
res.json({ error });
}
};
```
This function returns an Express handler function with all contained dependencies. We call the factory with the required dependencies and register it with Express, just like our previous inline solution.
```ts
app.post("/", StorePostHandlerFactory(postService));
```
I feel pretty comfortable saying the structure of this code is more logical now. We have atomic units, be they classes or functions, that can be tested independently and re-used when needed. But have we measurably improved the testability of the code? Let's try writing some tests and find out.
### Testing Our New Units
Observing the Single Responsibility principle means that we only unit test the one purpose a unit of code fulfills.
An ideal unit test for our persistence layer does not need to check that primary keys increment correctly. We can take the behavior of lower layers for granted or even replace them entirely with hard-coded implementations. In theory, if all our units behave correctly on their own, they will behave correctly when they compose (though this is obviously not always true — it's the reason we write integration tests.)
Another goal we mentioned is that unit tests shouldn't have side effects.
For persistence layer unit tests, this means that our development database is not affected by the unit tests we run. We can accomplish this by mocking the database, but I would argue that containers and virtualization are so cheap today that we may as well just use a real, but different, database for testing.
In our original example, this would be impossible without altering the app's global configuration or mutating a global connection variable in each test. Now that we're injecting dependencies, though, it's actually really easy:
```ts
describe("PostStore", () => {
let testDb: Connection;
let postStore: PostStore;
const testUserId: number = 1;
beforeAll(async () => {
testDb = await createConnection("mysql://test_database_url");
postStore = new PostStore(testDb);
});
it("should store a post", async () => {
const post = await postStore.store(
"Senior Node.js Engineer",
"Lorem ipsum dolet...",
78500,
WorkType.REMOTE,
testUserId,
undefined
);
expect(post.id).toBeDefined();
expect(post.approvedAt).toBeFalsy();
expect(post.jobTitle).toEqual("Senior Node.js Engineer");
expect(post.salary).toEqual(78500);
});
});
```
With only a few lines of setup code, we're now able to test our persistence code against a separate, isolated test database.
### Mocking on the Fly with Jest
But what if we want to test a unit in a "higher" layer, such as a business layer class? Let's look at the following scenario:
> Given the job listing data from a user who is not pre-approved for immediate publishing, the post
> service should store a post with a null `approved_at` timestamp.
Because we're only testing business logic, we don't need to test the process of storing or pre-approving an application user. We don't even need to test that the job posting is actually stored in an on-disk database.
Thanks to the magic of runtime reflection and the underlying dynamic nature of JavaScript, our testing framework will likely let us replace those components with hard-coded "mocks" on the fly. [Jest](https://jestjs.io/), a popular JavaScript testing library, comes with this functionality baked in, and many other libraries provide it as well (such as [SinonJS](https://sinonjs.org/)).
Let's write a test for this scenario, isolating it from any actual persistence or database logic using some simple mocks.
```ts
describe("PostService", () => {
let service: ReturnType<typeof PostService>;
let postStore: PostStore;
let userStore: UserStore;
const testUserId = 1;
beforeAll(async () => {
const db = await createConnection("mysql://test_database_url");
postStore = new PostStore(db);
userStore = new UserStore(db);
service = PostService(userStore, postStore);
});
it("should require moderation for new posts from unapproved users", async () => {
// for this test case, the user store should return an unapproved user
jest
.spyOn(userStore, "findById")
.mockImplementationOnce(async (id: number) => ({
id,
username: "test-user",
approved: false,
}));
// mocking the post store allows us to validate the data being stored, without actually storing it
jest
.spyOn(postStore, "store")
.mockImplementationOnce(
async (
jobTitle: string,
description: string,
salary: number,
workType: WorkType,
posterId: number,
approvedAt?: Date | undefined
) => {
expect(approvedAt).toBeUndefined();
return {
id: 1,
jobTitle,
description,
salary,
workType,
posterId,
approvedAt,
};
}
);
const post = await service.store(
"Junior Node.js Developer",
"Lorem ipsum dolet...",
47000,
WorkType.REMOTE,
testUserId
);
expect(post.id).toEqual(1);
expect(post.posterId).toEqual(testUserId);
});
});
```
### Benefits of Mocking
Mocking, here, is simply temporarily replacing functions or class methods with predictable replacements (that have no external dependencies), inside which we can:
1. Test the data that higher layers pass in.
2. Fully control the behavior of layers of code lower than the layer we are currently testing.
That last part is incredibly powerful. It allows us to do things like test whether specific types of errors return accurate HTTP status codes, without actually having to break things to create those errors.
We don't need to disconnect from the test database to test if a connection refused error from the database results in a 500 Internal Server Error in the HTTP response. We can simply mock the persistence code that calls the database and throw the same exception we would see in that scenario. Isolating our tests and testing small units allows us to test much more thoroughly, so we can be sure that the behavior depended on by higher layers is correctly specified.
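As a minimal, framework-free sketch of that idea (all names here are hypothetical, not from the job board code above), a stub that throws the same error a refused connection would lets us verify the 500 mapping directly:

```typescript
// The persistence dependency, declared as an interface.
interface PostStoreLike {
  store(jobTitle: string): Promise<{ id: number }>;
}

// A tiny handler-like function that maps persistence failures to a 500.
async function storePost(store: PostStoreLike): Promise<number> {
  try {
    await store.store("Senior Node.js Engineer");
    return 200;
  } catch {
    return 500;
  }
}

// A stub that fails exactly like a dead database connection would.
const downStore: PostStoreLike = {
  store: async () => {
    throw new Error("connect ECONNREFUSED 127.0.0.1:3306");
  },
};

storePost(downStore).then((status) => console.log(status)); // prints 500
```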
In well-isolated unit tests, we can mock any dependency. We can replace third-party web APIs with mock HTTP clients that are faster, cheaper, and safer than the real thing. If you want to ensure your application behaves correctly when an external API has an outage, you can replace it with a dependency that always returns a 503 for a subset of tests.
I know I'm really selling mocking here, but understanding the power of mock dependencies in small, focused unit tests was a kind of revelation for me. I'd heard the expression "don't test the framework" dozens of times, but it was only when mocking that I finally understood how it was possible to only test the behavior you're responsible for as a developer. It made my life much easier, and I hope this information can make yours easier, too.
### A Note on Test Frameworks When Mocking Dependencies
I used Jest in the above example. However, a more universal (and in some ways superior) way of mocking dependencies in object-oriented code is through polymorphism and inheritance.
You can extend dependency classes with mock method implementations or define your dependencies as interfaces and write entirely isolated classes that fulfill those interfaces for testing purposes. Jest is just more convenient because it lets you easily mock a method once without defining new types.
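As an illustration of the interface route (a hedged sketch; `UserFinder` and `FakeUserFinder` are names I made up for this example):

```typescript
// The dependency is declared as an interface...
interface UserFinder {
  findById(id: number): Promise<{ id: number; approved: boolean }>;
}

// ...so a test can supply a fully isolated implementation, no framework needed.
class FakeUserFinder implements UserFinder {
  async findById(id: number) {
    return { id, approved: false };
  }
}

new FakeUserFinder()
  .findById(7)
  .then((user) => console.log(user.approved)); // prints false
```

This works in any test runner, or none at all, at the cost of a little more ceremony than `jest.spyOn`.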
## Dependency Injection Libraries for TypeScript and JavaScript
Now that we're starting to think about dependencies as a sort of directed graph, you might notice how quickly the process of instantiating and injecting dependencies might become tiresome.
Several libraries are available for TypeScript and JavaScript to automatically resolve your dependency graph. These require you to manually list the dependencies of a class or use a combination of runtime reflection and decorators to infer the shape of your graph.
[Nest.js](https://nestjs.com/) is a notable framework that uses dependency injection, with a combination of decorators and explicit dependency declaration.
For existing projects, or if you don't want the weight of an opinionated framework like Nest, libraries like [TypeDI](https://github.com/typestack/typedi) and [TSyringe](https://github.com/microsoft/tsyringe) can help.
## Summing Up
In this post, we've taken a concrete example of an overwhelmed function and replaced it with a composition of smaller, testable units of code. Even if we accomplish identical lines-of-code test coverage for both versions, we can know exactly what broke and why when tests fail in the new version.
Before, we only generally knew that **something** broke, and we'd likely find ourselves digging through error messages and stack traces to figure out what input led to an exception, what the breaking change was, etc.
I hope this concrete example has helped to explain the two critical SOLID principles of single responsibility and dependency inversion.
It's worth noting that this is not the hammer for every nail. Our end goals are maintainability and reliability, and simple code is easier to maintain. Inversion of control is a great tool for managing complexity, but it is not a reason to introduce undue complexity to a simple program.
Until next time, happy coding!
**P.S. If you liked this post, [subscribe to our JavaScript Sorcery list](https://blog.appsignal.com/javascript-sorcery) for a monthly deep dive into more magical JavaScript tips and tricks.**
**P.P.S. If you need an APM for your Node.js app, go and [check out the AppSignal APM for Node.js](https://www.appsignal.com/nodejs).** | nate_anderson |
1,006,704 | Detecting bit-rot with md5deep | Thanks to luxagen for getting me to actually set something up for this. Turned out to be mighty... | 0 | 2022-06-10T23:22:59 | https://timwise.co.uk/2022/03/02/detecting-bit-rot-with-md5deep/ | ---
title: Detecting bit-rot with md5deep
published: true
date: 2022-03-02 00:00:00 UTC
tags:
canonical_url: https://timwise.co.uk/2022/03/02/detecting-bit-rot-with-md5deep/
---
Thanks to [luxagen](http://luxagen.com/) for getting me to actually set something up for this. Turned out to be mighty useful when I accidentally trashed half my home folder and wanted to know if [syncthing](https://syncthing.net/) had propagated any of the damage.
The use case is slightly different to mine, but [RotKraken](https://github.com/luxagen/RotKraken) is worth a look. Its unique feature is storing file hashes in the extended attributes of the same file. This is very tidy but doesn’t help me with catching unwanted deletions, hence going back to md5deep.
You’d think that running [md5deep aka hashdeep](https://github.com/jessek/hashdeep) wouldn’t be worthy of a blog post, but what I found is that the primary use case for hashdeep is actually validating the integrity of an installation in order to detect rootkits etc. This is not the same as what I’m doing which is being able to spot if I’ve lost any files I care about in `/home/tim` through carelessness or [bit-rot](https://en.wikipedia.org/wiki/Data_degradation). It turns out that md5deep does actually have what I needed, but the way the options are described means it’s not at all obvious that it would fulfill this need.
You can find [my hash and verify scripts as a gist here](https://gist.github.com/timabell/f70f34f8933b2abaf42789f8afdbd7d5)
It turns out the magic is in the “audit” section of the docs.
The terminology of the verification output is more about what it did than why you care. The important one for spotting bit-rot when verifying is `Known file not used` which means that you have a hash but you no longer have a matching file anywhere. Either you deleted it or modified it on purpose or you’ve just lost something you care about. Time to reach for the backups. I like [back-in-time](https://backintime.readthedocs.io/) to usb disks for backup.
## Hashing
```
hashdeep -c md5 -of -r -l Music Documents > hash_file.txt
```
[https://explainshell.com/explain?cmd=hashdeep+-c+md5+-of+-r+-l+Music+Documents](https://explainshell.com/explain?cmd=hashdeep+-c+md5+-of+-r+-l+Music+Documents)
Output:
```
%%%% HASHDEEP-1.0
%%%% size,md5,filename
## Invoked from: /home/tim
## $ hashdeep -c md5 -of -r -l Music Downloads Documents Pictures Phone Dropbox repo
##
3425,3ecc5852703f3846298b381bc2510a39,Music/checksums-verification-Music.txt
461,456e92277eaf9de695bd1229d80f059b,Music/checksums-verification-Music.txt.bak
100663,0d9d53e95e5d80fa43a64f5d02f25b1e,Music/checksums-Music.txt
794926,95c1558e7c97200140c37ffb0d12669d,Music/flac/Mordecai Smyth - Dial M For Mordecai/cover.jpg
17618302,cfb11490aacfdcbb79fd4310cf834e01,Music/flac/Mordecai Smyth - Dial M For Mordecai/Mordecai Smyth - Dial M For Mordecai - 02 Psychedelic Sarah.flac
...
```
## Verifying
```
hashdeep -k hash_file.txt -rle -of Music Documents -avv
```
[https://explainshell.com/explain?cmd=hashdeep+-k++hash\_file.txt+-rle+-of+Music+Documents+-avv](https://explainshell.com/explain?cmd=hashdeep+-k++hash_file.txt+-rle+-of+Music+Documents+-avv)
Output:
```
Documents/hashdeep-checksums-verification.txt.bak: Moved from Documents/hashdeep-checksums-verification.txt
Documents/hashdeep-checksums-verification.txt: No match
Documents/hashdeep-checksums.txt: No match
repo/rust-kata/.idea/workspace.xml: No match
repo/rust-kata/.git/logs/HEAD: No match
...
Documents/hashdeep-checksums-verification.txt.bak: Known file not used
Documents/hashdeep-checksums.txt: Known file not used
repo/rust-kata/.idea/workspace.xml: Known file not used
...
```
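Since `Known file not used` is the line that matters, step 2 of the workflow below is essentially a grep over this audit output. A self-contained sketch (the sample report is hard-coded here for illustration; in practice you would pipe the real hashdeep output in):

```shell
# Sample standing in for real `hashdeep ... -avv` audit output.
report='Documents/notes.txt: No match
Music/album/track01.flac: Known file not used
Documents/draft.txt: Moved from Documents/old-draft.txt'

# Count possible-loss lines; anything non-zero means reach for the backups.
printf '%s\n' "$report" | grep -c 'Known file not used'
```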
## Workflow
I have a monthly calendar reminder to run backups. When that goes off I:
1. Plug a backup HDD in and run back-in-time to update the backup
2. Run `verify-hashes.sh` and search the output for “Known file not used” to find any rot or churn.
3. Run `rehash.sh` to update the hashes.
4. Sleep easy.
I run one hash file for all folders. I started with one per top level folder but that meant the verify couldn’t spot things moved between folders and it reported them as missing.
It would be nice to iterate on this but it’s a good start. | timabell | |
1,253,146 | Introduction to Cuelang | Introduction to Cuelang | 0 | 2022-11-11T18:42:15 | https://dev.to/eminetto/introduction-to-cuelang-2631 | cue, cuelang, go | ---
title: Introduction to Cuelang
published: true
description: Introduction to Cuelang
tags: cue, cuelang, go, golang
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2022-11-11 18:40 +0000
---
I bet that at that moment, you are thinking:
> "Another programming language"?
Calm down, calm down, come with me, and it will make sense :)
Unlike other languages like Go or Rust, which are "general-purpose languages," [CUE](https://cuelang.org) has some particular objectives. Its name is actually an acronym that stands for "Configure Unify Execute," and according to the official documentation:
> Although the language is not a general-purpose programming language, it has many applications, such as data validation, data templating, configuration, querying, code generation, and even scripting.
It is described as a "superset of JSON" and is heavily inspired by Go. Or, as I like to think:
> "Imagine that Go and JSON had a romance, and the fruit of that union was CUE" :D
In this post, I will present two scenarios where the language can be used, but the official [documentation](https://cuelang.org/docs/) has more examples and a good amount of information to consult.
## Validating data
The first scenario where CUE excels is in data validation. It has native [support](https://cuelang.org/docs/integrations/) for validating YAML, JSON, and Protobuf, among others.
I'll use some examples of [configuration files](https://doc.traefik.io/traefik/user-guides/crd-acme/) from the Traefik project, an API Gateway.
The following YAML defines a valid route to Traefik:
```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: simpleingressroute
namespace: default
spec:
entryPoints:
- web
routes:
- match: Host(`your.example.com`) && PathPrefix(`/notls`)
kind: Rule
services:
- name: whoami
port: 80
```
With this information, it is possible to define a new route in API Gateway, but if something is wrong, we can cause some problems. That's why it's essential to have an easy way to detect issues in configuration files like this. And that's where CUE shows its strength.
The first step is to have the language installed on the machine. As I'm using macOS, I just ran the command:
```bash
brew install cue-lang/tap/cue
```
In the official [documentation](https://cuelang.org/docs/install/), you can see how to install it on other operating systems.
Now we can use the `cue` command to turn this YAML into a `schema` of the CUE language:
```bash
cue import traefik-simple.yaml
```
A file called `traefik-simple.cue` is created with the contents:
```go
apiVersion: "traefik.containo.us/v1alpha1"
kind: "IngressRoute"
metadata: {
name: "simpleingressroute"
namespace: "default"
}
spec: {
entryPoints: [
"web",
]
routes: [{
match: "Host(`your.example.com`) && PathPrefix(`/notls`)"
kind: "Rule"
services: [{
name: "whoami"
port: 80
}]
}]
}
```
It's a literal translation from YAML to CUE, but let's edit it to create some validation rules. The final content of `traefik-simple.cue` looks like this:
```go
apiVersion: "traefik.containo.us/v1alpha1"
kind: "IngressRoute"
metadata: {
name: string
namespace: string
}
spec: {
entryPoints: [
"web",
]
routes: [{
match: string
kind: "Rule"
services: [{
name: string
port: >0 & <= 65535
}]
}]
}
```
Some of the items were exactly the same, like `apiVersion: "traefik.containo.us/v1alpha1"` and `kind: "IngressRoute"`. This means that these are the exact values expected in all files that will be validated by this `schema`. Any value different from these will be considered an error. Other information has changed, such as:
```go
metadata: {
name: string
namespace: string
}
```
In this snippet, we define that the content of `name`, for example, can be any valid `string`. In the excerpt `port: >0 & <= 65535`, we define that this field only accepts a number greater than 0 and less than or equal to 65535.
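CUE supports richer constraints than numeric bounds, too. A few illustrative forms (these fields are examples of mine, not part of the Traefik schema):

```go
name: string & =~"^[a-z][a-z0-9-]*$" // string constrained by a regular expression
replicas: int & >=1 & <=10 // bounded integer
level: "debug" | "info" | "warn" | "error" // a disjunction acts like an enum
port: *8080 | int // any int, defaulting to 8080 when unspecified
```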
It is now possible to validate that the YAML content conforms to the `schema` using the command:
```bash
cue vet traefik-simple.cue traefik-simple.yaml
```
If everything is correct, nothing is displayed on the command line. To demonstrate how it works, I altered `traefik-simple.yaml`, changing the value of `port` to `0`. Then, when rerunning the command, you can see the error:
```bash
cue vet traefik-simple.cue traefik-simple.yaml
spec.routes.0.services.0.port: invalid value 0 (out of bound >0):
./traefik-simple.cue:16:10
./traefik-simple.yaml:14:18
```
If we change any of the expected values, such as `kind: IngressRoute` to something different, such as `kind: Ingressroute,` the result is a validation error:
```bash
cue vet traefik-simple.cue traefik-simple.yaml
kind: conflicting values "IngressRoute" and "Ingressroute":
./traefik-simple.cue:2:13
./traefik-simple.yaml:2:8
```
This way, finding an error in a Traefik route configuration is very easy. The same can be applied to other formats like JSON, Protobuf, Kubernetes files, etc.
I see an obvious scenario for using this data validation power: adding a step to CI/CD pipelines that uses CUE to validate configurations at `build` time, avoiding problems in the `deploy` stage and application execution. Another scenario is to add the commands to a Git `hook` to validate the configurations in the development environment.
Another exciting feature of CUE is the possibility of creating `packages`, which contain a series of `schemas` that can be shared between projects in the same way as a `package` in Go. In the official [documentation](https://cuelang.org/docs/concepts/packages/#packages), you can see how to use this feature and some [native](https://cuelang.org/docs/concepts/packages/#builtin-packages) `packages` of the language, such as `strings`, `list`, `regexp`, etc. We'll use a `package` in the following example.
## Configuring applications
Another usage scenario for CUE is as an application configuration language. Anyone who knows me knows I have no appreciation for YAML (to say the least), so any other option catches my eye. But CUE has some exciting advantages:
- Because it is JSON-based, reading and writing are much simpler (in my opinion)
- Solves some JSON issues like missing comments, which was a winning feature for YAML
- Because it is a complete language, it is possible to use `if`, loops, built-in packages, type inheritance, etc.
The first step for this example was creating a package to store our configuration. For that, I made a directory called `config` and, inside it, a file called `config.cue` with the content:
```go
package config
db: {
user: "db_user"
password: "password"
host: "127.0.0.1"
port: 3306
}
metric: {
host: "http://localhost"
port: 9091
}
langs: [
"pt_br",
"en",
"es",
]
```
The next step was to create the application that reads the configuration:
```go
package main
import (
"fmt"
"cuelang.org/go/cue"
"cuelang.org/go/cue/load"
)
type Config struct {
DB struct {
User string
Password string
Host string
Port int
}
Metric struct {
Host string
Port int
}
Langs []string
}
// LoadConfig loads the Cue config files, starting in the dirname directory.
func LoadConfig(dirname string) (*Config, error) {
cueConfig := &load.Config{
Dir: dirname,
}
buildInstances := load.Instances([]string{}, cueConfig)
runtimeInstances := cue.Build(buildInstances)
instance := runtimeInstances[0]
var config Config
err := instance.Value().Decode(&config)
if err != nil {
return nil, err
}
return &config, nil
}
func main() {
c, err := LoadConfig("config/")
if err != nil {
panic("error reading config")
}
//the struct was populated with the values
fmt.Println(c.DB.Host)
}
```
One advantage of CUE's `package` concept is that we can break our configuration into smaller files, each with its own functionality. For example, inside the `config` directory, I split `config.cue` into separate files:
*config/db.cue*
```go
package config
db: {
user: "db_user"
password: "password"
host: "127.0.0.1"
port: 3306
}
```
*config/metric.cue*
```go
package config
metric: {
host: "http://localhost"
port: 9091
}
```
*config/lang.cue*
```go
package config
langs: [
"pt_br",
"en",
"es",
]
```
And it was not necessary to change anything in the `main.go` file for the settings to be loaded. With this, we can better separate the contents from the settings without impacting the application code.
## Conclusion
I just "scratched the surface" of what's possible with CUE in this post. It has been [attracting attention](https://twitter.com/kelseyhightower/status/1329620139382243328?s=61&t=mVll7YR0fRVtNeZLEVwKnA) and being adopted in projects such as [Istio](https://istio.io/), which uses it to generate OpenAPI `schemas` and CRDs for Kubernetes, and [Dagger](https://docs.dagger.io/1215/what-is-cue/). It is a tool that can be very useful for several projects, mainly due to its data validation power. And as a replacement for YAML, for my personal joy :D
Originally published at [https://eltonminetto.dev](https://eltonminetto.dev/en/post/2022-11-08-intro-cuelang/) on November 08, 2022. | eminetto |
1,007,917 | The 3 Best Full-Stack Hosting Platforms (2022) | I’ve compiled the 3 best full-stack hosting platforms based on 3 criteria: Ease of Use (You don’t... | 0 | 2022-03-03T16:25:34 | https://webdevwithseb.com/2022/03/03/the-3-best-full-stack-hosting-platforms-2022/?utm_source=rss&utm_medium=rss&utm_campaign=the-3-best-full-stack-hosting-platforms-2022 | fullstack, hosting, platform, webdev | ---
title: The 3 Best Full-Stack Hosting Platforms (2022)
published: true
date: 2022-03-03 13:52:00 UTC
tags: FullStack,Hosting,fullstack,hosting,platform,Platform,webdev
canonical_url: https://webdevwithseb.com/2022/03/03/the-3-best-full-stack-hosting-platforms-2022/?utm_source=rss&utm_medium=rss&utm_campaign=the-3-best-full-stack-hosting-platforms-2022
---
I’ve compiled the 3 best full-stack hosting platforms based on 3 criteria:
1. **Ease of Use** (You don’t need to learn/know dev ops)
2. **Generous free tiers**
3. **Affordability**
> _Disclaimer: Since we are focusing on **full-stack apps** we will ignore popular front end hosting platforms like Vercel and Netlify. These are great solutions for deploying frontends, but they do not let you deploy your own backend._
### 1) [Railway](https://railway.app?referralCode=9Y13IY)

Railway launched in 2020. The platform is growing in popularity with startups and solo developers.
Best of all, it is incredibly affordable. Their developer plan offers 100 GB of disk and 32GB of RAM. Depending on your app’s hourly usage, this can cost you a couple of dollars a month.
I am currently running [photorake](http://www.photorake.com) (lots of disk usage!), [comb](http://comb.social), and [umami](https://umami.is/) for $1 a month on Railway. This is the best pricing I’ve found across the board.
Pros
- Logs & monitoring within web UI
- Automated Git based deploys via Github
- [Flexible Pricing (only pay for what you use)](https://railway.app/pricing)
- $10 of free credits a month
- [Easy to use App Starters](https://railway.app/starters)
- Apps never go dormant due to inactivity
Cons
- Uncapped expenses when your app’s scales
### 2) [Render](https://render.com)

Render launched in 2018, and they’re positioning themselves to be the next Heroku. You can host static sites, web apps, background workers, cron jobs, Postgres databases, and Redis caches on Render.
They have a similar pricing model to Heroku where you pick the infrastructure up front based on your RAM and CPU needs. Their prices are [heavily discounted when compared to Heroku](https://render.com/render-vs-heroku-comparison), which is awesome.
Pros
- [Competitive Pricing with Heroku](https://render.com/render-vs-heroku-comparison)
- Logs & monitoring within web UI
- Automated Git based deploys via Github
- Various apps supported (static sites, bg workers, cron jobs, DBs)
- [One-click Quick Start Deployments](https://render.com/docs)
- Expenses are fixed based on plan
Cons
- Disk is $0.25/GB (3x EBS pricing on AWS)
- [Free apps go dormant after 15 minutes of inactivity](https://render.com/docs/free#free-web-services)
### 3) [Heroku](https://www.heroku.com/)

Heroku is the original developer friendly platform for full-stack web apps since 2007. It is popular with enterprises, startups, and solo developers alike.
You can host full stack apps, Postgres DBs, Redis Caches, and Apache Kafka queues. Rails and Django apps are commonly hosted on Heroku.
Pros
- Great for Rails and Django Apps
- Logs & monitoring within web UI
- Automated Git based deploys via Github
- [Robust Add-ons Ecosystem](https://elements.heroku.com/addons)
Cons
- [Free apps go dormant after 30 mins of inactivity](https://blog.heroku.com/app_sleeping_on_heroku)
### Which is my favorite?
After having tried the above platforms, I’ve decided to run my apps on Railway. The fact I don’t have to know my CPU/RAM needs up front, their affordable pricing, and the fact my apps never go dormant have won me over.

_I’m only paying 1$ a month to host 3 web apps with Railway._
I encourage you to give these 3 a try and find your favorite; you won’t be disappointed.
If this helped you narrow down a platform for your next side project, give me a follow on [twitter](https://twitter.com/_sbmsr), and [subscribe to my newsletter](https://github.us19.list-manage.com/subscribe?u=3122ad8b8bdaf8335e05ec4fa&id=53643b22fa).
Happy Hacking! 😎
The post [The 3 Best Full-Stack Hosting Platforms (2022)](https://webdevwithseb.com/2022/03/03/the-3-best-full-stack-hosting-platforms-2022/) first appeared on [💻 Web Dev With Seb](https://webdevwithseb.com). | sbmsr |
1,008,075 | The React.ReactNode type is a black hole | As developers, we use TypeScript for a few different reasons. The self-documentation aspects are huge... | 0 | 2022-03-03T16:21:51 | https://changelog.com/posts/the-react-reactnode-type-is-a-black-hole | javascript, typescript, react, webdev | ---
title: The React.ReactNode type is a black hole
published: true
date: 2022-02-16 07:00:00 UTC
tags: javascript,typescript,react,webdev
canonical_url: https://changelog.com/posts/the-react-reactnode-type-is-a-black-hole
---
As developers, we use TypeScript for a few different reasons. The self-documentation aspects are huge – being able to step into an unfamiliar function and know the shape of the objects it's expecting is a massive boon when working on a large project. The added tooling features, with [IntelliSense](https://code.visualstudio.com/docs/editor/intellisense) and its ilk, are also a big help for productivity. But to me, the most important reason to use a strongly typed system is to <mark>*eliminate* an entire class of runtime bugs</mark>, where a function gets passed an object it doesn't know how to handle and fails at runtime.
It's that last reason that leads to the purpose of this post. I recently handled a bug where a React component was throwing an exception at runtime. The source of the issue was a recent refactor done when internationalizing this area of our application, where a prop expecting a renderable `React.ReactNode` was accidentally getting passed an object of class `TranslatedText` which could not render.
This is *exactly* the sort of bug we would expect TypeScript to catch at compile time!
How did this happen? At a high level it is because the `React.ReactNode` type included in `DefinitelyTyped`, used in hundreds of thousands of codebases around the world, is so weakly defined as to be practically meaningless.
_We discussed this at a high level during the TIL segment of [JS Party #213](https://jsparty.fm/213), but I thought it deserved a more rigorous treatment._
Come along as I share the exploration, why this bug has lingered in the wild for more than 3 (!) years [since it was originally reported](https://github.com/DefinitelyTyped/DefinitelyTyped/issues/29307), and how we worked around it in our codebase <mark>to get ourselves protected again</mark>.
## The Situation
It started with a simple bug report:
```text
When I click on "Boost nudges" and attempt to select a filter group, I get an error saying something went wrong. This feature is vital for a demo I have tomorrow.
```
My first check was to see if I could reproduce it in the production application. I could. Next was to fire up a developer environment so I could get a useful backtrace, and the error was extremely clear:
<figure class="richtext-figure richtext-figure--full">
<img src="https://changelog-assets.s3.amazonaws.com/posts/2511/uncaught-error.png" alt="Browser console backtrace: react-dom.development.js:13435 Uncaught Error: Objects are not valid as a React child (found: object with keys {_stringInfo, _vars}). If you meant to render a collection of children, use an array instead.
in div (created by styled.div)
in styled.div (at DemographicCutFilterModal.tsx:159)
in DemographicCutFilterModalBody (at InsightCreationModal.tsx:330)" loading="lazy">
</figure>
Interpretation: React was trying to render something that it could not render. Using the file and line numbers to track down more, I could see that the object in question was a prop called `description` with the following type definition:
```typescript
description: string | React.ReactNode;
```
The caller was passing it instead a `TranslatedText` object, which is a class we use in our system to handle internationalization. The expected use is that this object is passed to a `<T>` component that knows how to use it and a library of strings to render text in the correct language for the current user.
Having seen this, <mark>the fix was super simple</mark>. Wrap the `TranslatedText` object in a `<T>` component before passing it in as a prop.
<figure class="richtext-figure richtext-figure--full">
<img src="https://changelog-assets.s3.amazonaws.com/posts/2511/the-fix.png" alt="code diff showing replacement of description={cms('hero.insights.create_modal.who')} with description={<T k={cms('hero.insights.create_modal.who')} />" loading="lazy">
</figure>
With this patch in place, the immediate bug was resolved, and the demo mentioned in the ticket unblocked.
Understanding how the bug came to be was super simple - this portion of the application had only recently been internationalized, and the bug was introduced in that work. But then the real puzzle started: <mark>Isn't this type of bug exactly what using TypeScript and types is supposed to prevent?</mark> How in the world had the type system allowed something that was not renderable by React to be passed into a prop with type `string | React.ReactNode`?
## The trail
When I first saw that this problem wasn't being caught, my initial thought was that maybe type checking wasn't getting run at all. Maybe we had a bug with cross-module calls, or there was a problem in our configuration. But I was quickly able to rule this out by reducing the prop type to `string` and seeing that it triggered a type error.
The next thing I tried was testing to see if `TranslatedText` was somehow implementing the `React.ReactNode` interface, but adding a quick `implements` annotation to TranslatedText (i.e. `class TranslatedText implements React.ReactNode`) resulted in the compiler throwing an error. That matched my expectations, because it **DOESN’T** implement the interface - if it did, we wouldn't have had this problem in the first place!
I then started diving down into the way that `React.ReactNode` was defined. These definitions are coming from `DefinitelyTyped`, the canonical open source repository of type definitions for npm packages that don't natively include types, and the [key definitions](https://github.com/DefinitelyTyped/DefinitelyTyped/blob/2034c45/types/react/index.d.ts#L203) look like this:
```typescript
type ReactText = string | number;
type ReactChild = ReactElement | ReactText;
interface ReactNodeArray extends Array<ReactNode> {}
type ReactFragment = {} | ReactNodeArray;
type ReactNode = ReactChild | ReactFragment | ReactPortal | boolean | null | undefined;
```
There it is, in the `ReactFragment` definition!
The `ReactFragment`, which is included in the `ReactNode` type, includes an empty interface. Due to [the way that TypeScript handles excess property checks](https://2ality.com/2020/01/typing-objects-typescript.html#excess-property-checks), this means that the `ReactNode` type will accept any object *except* an object literal, even though most functions using this type expect it to mean "something renderable by React". For almost all intents and purposes, it is functionally equivalent to an `any` type.
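To see the hole in isolation, here is a small self-contained sketch. The types below are simplified stand-ins I wrote for this illustration (not the real React typings), and `render` is a toy function mimicking React's runtime check, but it shows the shape of the bug: the assignment compiles cleanly and only blows up at runtime.

```typescript
// Simplified stand-ins for the DefinitelyTyped definitions quoted above.
type ReactFragmentLike = {} | ReactNodeLike[];
type ReactNodeLike = string | number | boolean | null | undefined | ReactFragmentLike;

// An object React cannot render...
class TranslatedText {
  _stringInfo: string;
  constructor(stringInfo: string) {
    this._stringInfo = stringInfo;
  }
}

// ...yet the compiler accepts it, because any object matches the `{}` member.
const node: ReactNodeLike = new TranslatedText('hero.insights.create_modal.who');

// A toy "renderer" that, like React, only handles genuinely renderable values.
function render(child: ReactNodeLike): string {
  if (typeof child === 'string' || typeof child === 'number') return String(child);
  throw new Error('Objects are not valid as a React child');
}

let failed = false;
try {
  render(node);
} catch {
  failed = true;
}
console.log(failed); // true: no compile error, but a runtime crash
```

Removing the `{}` member from the union is exactly what makes the compiler start rejecting the `TranslatedText` assignment.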
At this point I brought this back to our team at [Humu](https://www.humu.com/):
<figure class="richtext-figure richtext-figure--full">
<img src="https://changelog-assets.s3.amazonaws.com/posts/2511/kball-humu-chat.png" alt="Slack message from KBall: I just found a fun hole in our typing system… context, I was trying to track down why typescript didn’t catch https://humuan.atlassian.net/browse/IER-3633 ahead of time. The problem was the prop was looking for a react node, and a caller changed to pass TranslatedText, which then errored on render. The source of the issue appears to be that React.ReactNode is almost functionally equivalent to Any. See the definition here: https://github.com/DefinitelyTyped/DefinitelyTyped/blob/2034c45/types/react/index.d.ts#L203 It includes ReactFragment which is defined as {} | ReactNodeArray so ANY OBJECT will match it. This means anywhere we’re using React.ReactNode as a type, we are essentially not covered by TypeScript. It looks like we currently use this type in 157 places, so… My inclination is that we should actively move to remove this type and add a lint to prevent it except in places where we absolutely need to allow fragments, and even then probably (?) use ReactNodeArray instead… Maybe we could define a ReactLessPermissiveNode = ReactChild | ReactNodeArray | ReactPortal | boolean | null | undefined and see if we can use that for most if not all of our usecases? What do y’all think?" loading="lazy">
</figure>
As folks dug in, one of our team members discovered that this has been a [known issue since 2018](https://github.com/DefinitelyTyped/DefinitelyTyped/issues/29307)! There is [a discussion](https://github.com/DefinitelyTyped/DefinitelyTyped/discussions/55422) that implies an intent to fix the issue, but there are concerns about the ripple effects of introducing a fix, and there has been no progress for the better part of a year.
## First attempts at a fix
As we started looking at ways to address this issue in our codebase, we considered two options:
1. Moving everything in our codebase to a custom type
2. Using `patch-package` to update the React.ReactNode definition
Assessing the pros and cons of these different approaches, we felt that the `patch-package` approach would require fewer code changes and less ongoing cognitive load, but would have the disadvantage of requiring an additional dependency (and associated transitive dependencies) and making it perhaps less visible what's going on.
In the end, we decided to try `patch-package` first because it would be less work. The change was super simple; we attempted a patch to the `ReactFragment` type that looked very much like the one that was proposed in the DefinitelyTyped discussion thread:
```typescript
type Fragment = {
  key?: string | number | null;
  ref?: null;
  props?: {
    children?: ReactNode;
  };
}
```
While this approach didn't trigger any internal typing issues within our codebase, and resulted in the type system being able to catch the class of error that had bitten us at the beginning, it resulted in cascading type errors in calls into several React ecosystem libraries. We ran into troubles at the interface of our code into `react-beautiful-dnd`:
<figure class="richtext-figure richtext-figure--full">
<img src="https://changelog-assets.s3.amazonaws.com/posts/2511/draggable-error.png" alt="Error message on a '<Draggable>' component showing 'No overload matches this call.' and various type errors with JSX.Element not being assignable to a type of DraggableChildrenFn & ReactNode" loading="lazy" />
</figure>
After diving down the rabbit hole and trying to figure out those type issues for a little while, only to have every change result in more and more type challenges, I decided that this would require someone with more TypeScript chops than me to figure out.
<figure class="richtext-figure richtext-figure--full">
<img src="https://changelog-assets.s3.amazonaws.com/posts/2511/kball-humu-chat-2.png" alt="Slack message from KBall: OK so I’m seeing why this is still unpatched in the repo. :sob:
Updating the type for ReactFragment causes rippling problems in the typing of other open source packages, the one I’ve been struggling with for a while is in react-beautiful-dnd, but even if I hack around that (I don’t have a clean fix yet) I find more issues interacting with react-flip-toolkit.
I’m back to creating a custom type so we can be more strict within our codebase without having to trace down and fix complicated typing issues in every open source package we get that has fallen into this black hole of permissive typing" loading="lazy" />
</figure>
## The Second Approach
The second approach we tried was to create a stricter type in our codebase, find/replace to use it everywhere, and then add a linter to keep it from being used. The types file we ended up with was very similar to the one we'd tried in the patch approach:
```typescript
import { ReactChild, ReactPortal, ReactNodeArray } from 'react';

export type StrictReactFragment =
  | {
      key?: string | number | null;
      ref?: null;
      props?: {
        children?: StrictReactNode;
      };
    }
  | ReactNodeArray;

export type StrictReactNode =
  | ReactChild
  | StrictReactFragment
  | ReactPortal
  | boolean
  | null
  | undefined;
```
After verifying that this type actually caught the types of type error that we were trying to prevent, it was time to make the replacement across our codebase.
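As a quick sanity check, here is a simplified, self-contained version of that verification. The trimmed-down types below are illustrative stand-ins for the real `StrictReactNode`, and I'm assuming default `tsc` settings: the class from the original bug no longer type-checks, while ordinary renderable values still do.

```typescript
// Trimmed-down stand-ins for the stricter types above.
type StrictReactFragmentLike =
  | {
      key?: string | number | null;
      ref?: null;
      props?: { children?: StrictReactNodeLike };
    }
  | StrictReactNodeLike[];
type StrictReactNodeLike =
  | string
  | number
  | boolean
  | null
  | undefined
  | StrictReactFragmentLike;

class TranslatedText {
  _stringInfo: string;
  constructor(stringInfo: string) {
    this._stringInfo = stringInfo;
  }
}

// @ts-expect-error -- TranslatedText has no properties in common with the
// fragment shape, so the original bug is now a compile-time error.
const bad: StrictReactNodeLike = new TranslatedText('hero.title');

// Genuinely renderable values still type-check as before.
const ok: StrictReactNodeLike = ['hello', 42, null];
console.log(Array.isArray(ok)); // true
```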
I briefly explored using [jscodeshift](https://github.com/facebook/jscodeshift) to automatically make the replacement. I started going down that road, but I have no prior experience using jscodeshift and it was proving tricky. As I had limited time, I decided that our codebase was small enough that running find/replace in VS Code plus manually adding the import would be tractable and much faster than continuing to try to figure out jscodeshift.
*NOTE: If anyone wants to write this codemod and send it to me, I'd be happy to include it as an addendum to this post with a shoutout to you!*
One PR later, we had a much safer codebase using `StrictReactNode` everywhere, but there was one step left to make this sustainable.
## Writing an ESLint plugin
The reason `React.ReactNode` had permeated our codebase is that it is such a logical type to use in many situations. Any time you want to assert a prop is renderable by React, it's natural to reach for `React.ReactNode`.
Now we need all of our developers to instead reach for `StrictReactNode`. Leaving this to developer discretion or requiring this to be a part of manual code review and/or education seemed untenable, especially in a rapidly growing company like Humu.
To enforce the new practice and make it seamless to keep our codebase up to date and safe, we decided to write a custom ESLint linter to check for `React.ReactNode` and throw an error with a pointer to our preferred type.
This post is not about how ESLint plugins work, but in case you want to use it, here is the plugin we arrived at:
```javascript
module.exports = {
  create(context) {
    return {
      TSTypeReference(node) {
        if (
          node.typeName.type === 'TSQualifiedName' &&
          node.typeName.left.name === 'React' &&
          node.typeName.right.name === 'ReactNode'
        ) {
          context.report(
            node,
            node.loc,
            'React.ReactNode considered unsafe. Use StrictReactNode from humu-components/src/util/strictReactNode instead.',
          );
        }
      },
    };
  },
};
```
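The post doesn't show how the rule gets wired up, so here is a hypothetical sketch of the config side (the plugin and rule names are placeholders I made up, not the actual Humu setup). Since the rule matches `TSTypeReference` nodes, it assumes a TypeScript-aware parser:

```javascript
// .eslintrc.js -- illustrative wiring only; the names below are assumptions.
module.exports = {
  parser: '@typescript-eslint/parser', // TSTypeReference nodes only exist with a TS parser
  plugins: ['humu-custom'], // e.g. an eslint-plugin-humu-custom package exposing the rule above
  rules: {
    'humu-custom/no-react-reactnode': 'error',
  },
};
```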
Now if someone accidentally tries to use `React.ReactNode` in a type declaration, they get an error that looks like this:
<figure class="richtext-figure richtext-figure--full">
<img src="https://changelog-assets.s3.amazonaws.com/posts/2511/eslint-rule-error.png" alt="console log showing 'error: React.ReactNode considered unsafe. Use STrictReactNode from humu-components/src/util/strictReactNode instead'" loading="lazy" />
</figure>
Linting is a part of our CI testing that occurs before any branch can be merged, so this prevents anyone from accidentally pulling in the unsafe `React.ReactNode` type and points them to the replacement type instead.
*Update*: [Mathieu TUDISCO](https://twitter.com/mathieutu) wrote a [more generalized eslint plugin with a fixer](https://gist.github.com/mathieutu/577be7f0cbeba71a894981f07fc082e3)!
## Wrapping Up
From my perspective, the entire goal of using TypeScript and a type system is to be able to prevent an entire class of bugs and make refactors like the original one that sparked this safe to do.
Having a wide open type like this in a super commonly used library is super scary. Time permitting, I will continue to work on getting this patched in DefinitelyTyped, but the ecosystem problem is large enough that this is unlikely to happen in a timely manner. Changes of this magnitude create a massive wave of ripples and types that need to be updated.
In the meantime, I <mark>highly recommend</mark> using an approach like our `StrictReactNode` to protect your codebase. | kball |
1,008,155 | iOS vs Android gaming: development issues and platforms comparison for better game development performance | No operating system is totally better than the other one. Both iOS and Android have advantages and... | 0 | 2022-03-03T17:08:20 | https://dev.to/ruslanharanin/ios-vs-android-gaming-development-issues-and-platforms-comparison-for-better-game-development-performance-m7p | gamedev, android, ios | No operating system is totally better than the other one. Both iOS and Android have advantages and limitations.
The topic of which one is better for gaming attracts a variety of opinions, most of them largely subjective. That is why, in this blog, you can find facts and figures, supported by developers' opinions, about iOS vs Android gaming development issues.
[https://innovecsgaming.com/blog/ios-vs-android-gaming-development/](https://innovecsgaming.com/blog/ios-vs-android-gaming-development/) | ruslanharanin |
1,008,723 | Writing a business plan for a startup or app solution. | A business plan is fundamental for business success. Just like you need goals and... | 0 | 2022-03-04T05:00:44 | https://dev.to/mrpaulishaili/writing-a-business-plan-for-a-startup-or-app-solution-2djm | businessplan, tutorial, development, strategicplanning | > ## A business plan is fundamental for business success.
Just like you need goals and direction to succeed in life, you need a game plan to succeed in business, and it’s not as simple as it might initially sound.
It’s not just about setting goals of what you want to achieve, you actually have to back up all your claims with real data and statistics, strategize for the future and predict eventual crisis as accurately as possible.
Usually, a business plan is presented to banks and investors to convince them to invest. For example, let’s imagine that you want to launch a start-up. You plan to create an app that Fashion Designers can use to organize their orders, receive payments, and manage customers.
There are many ways to go about this:
- You could get a bank loan
- Give equity in exchange for money
- Join a start-up incubator or accelerator and so on.
Whichever you choose, you need to sell people your vision. First of all, no one is going to give you money for free. If you’re looking for investors, they are probably going to ask you for equity. If you’re going to talk with a bank, they are going to give you an interest rate, and so on. And, obviously, they definitely aren’t going to give you their money if you don’t have a good idea and a strategy. So, you need a business plan for your Fashion App in order to sell it to potential investors, banks, and so on.
We are going to go through developing a strategic business plan, which helps you achieve your business goals and also shows people that you know what you're doing, because at the end of the day, a plan that is backed up by research and facts is very compelling.
## Step One: Define your vision with precision
Your vision is what generates all the goals and strategies. To develop a game plan, you need to know exactly what you want. So, let's look at the Fashion App example. Your vision is pretty simple. You want to create an app that will be ...
### [Download the full ebook here](https://mega.nz/file/jo9iDTaB#342-Ib-neJ2CgezGrKBbiXhqFjEGSSrC9Kytpl4PVFI)
| mrpaulishaili |
1,009,704 | Day:37 Training at Ryaz : Mongoose-modeling relationship b/w connected data | Date:4/02/2022 Day:Friday Today I started at about 10:30 am as I was finished with seventh module... | 0 | 2022-03-04T16:26:36 | https://dev.to/mahin651/day37-training-at-ryaz-mongoose-modeling-relationship-bw-connected-data-26jn | webdev, beginners, node, programming | - Date:4/02/2022
- Day:Friday
Today I started at about 10:30 am as I was finished with seventh module so, I started with eighth module as it was about mongoose: modeling relationships between connected data so, I started with it firstly the instructor taught about model relationships between connected data, we can either reference a document or embed it in another document.so, we have these two options but,When referencing a document, there is really no relationship between these two documents. So, it is possible to reference a non-existing document.Referencing documents is a good approach when we want to enforce data consistency. Because there will be a single instance of an object in the database. But this approach has a negative impact on the performance of your queries because in MongoDB we cannot JOIN documents as we do in relational databases. So, to get a complete representation of a document with its related documents, we need to send multiple queries to the database.But Embedding documents solves this issue.And after this instructor taught about how to reference document and also discussed about implementations transaction with fawn. so, this way my day ended up and I got to learn many new things which enhanced my knowledge. | mahin651 |
1,010,199 | How To Use GitHub For Project Collaboration — Based on Agile Method | Hello Fellow CodeNewbies 👋, A few months ago, I had a great experience participating in a project... | 15,234 | 2022-03-04T23:28:00 | https://adiati.com/how-to-use-github-for-project-collaboration-based-on-agile-method | github, git, tutorial, codenewbie | Hello Fellow CodeNewbies 👋,
A few months ago, I had a great experience participating in a project collaboration program with [The Collab Lab](https://the-collab-lab.codes/). I can't thank The Collab Lab enough for giving me the opportunity and experience to learn how to collaborate with other developers 💖.
Based on that experience, I want to share with you what I learned.
So, in this article, I will walk you through how to use GitHub for project collaboration based on the Agile method.
Without further ado, let's start! 😄
---
## Setup the collaboration environment on GitHub
### 1. Create a repo for the collaboration
One of the collaborators will create a repo on their GitHub. It will be the repo for all collaborators to push their changes. This remote repo is what we call the `origin` repo.
- On the GitHub homepage, click on the "New" green button at the left bar.

It will navigate you to the "Create a new repository" page.
- Fill out the form and click the "Create repository" button.

- Copy and paste one of the options on the command line and click enter.
- If you haven't set up the local repo, follow the `…or create a new repository on the command line`.
Replace the `first commit` commit message with your own.
- If you already set up a local repo, follow the `…or push an existing repository from the command line`.
- Click the "Code" tab.
The initial files and folders are now available on the `origin` repo.
### 2. Add collaborators
- On the GitHub repo, go to the "Settings" tab.
- At the left bar, click on "Collaborators".

- Click the "Add people" button.

- Enter the collaborator's GitHub username, full name, or email they use for GitHub, then click the "Add <username> to this repository" button.

You can see the collaborators in the "Contributors" list at the right bar on the repo's homepage.

### 3. Create issues
Whether you do a solo project or a team project, breaking the tasks into small chunks is good practice.
Put these small chunks as issues on GitHub.
- Go to the "Issues" tab on the repo homepage.
- Click the "New issue" button.

- Add the title and the description.
- Add the assignees if you have decided who will handle the issue.
You can find this in the right bar.

#### 💡 Tips to write issues for adding a new feature to the project
- Write a clear and straightforward title as a [user story](https://www.atlassian.com/agile/project-management/user-stories).
```markdown
# As a user, I want to be able to add a new task to my to-do list
```
- Include Acceptance Criteria (AC).
> Acceptance criteria are the predefined requirements that must be met, taking all possible scenarios into account, to consider a user story to be finished.
— [KnowledgeHut](https://www.knowledgehut.com/blog/agile/what-are-acceptance-criteria)
```markdown
## Acceptance Criteria
- User is presented with a form to enter their new task.
- When the user submits the form, the task will be saved to the database.
```
### 4. Create project boards
⚠️ GitHub has had new updates for the projects board. And I have updated this section in my article, "[Planning And Tracking Projects With GitHub's Projects Tool](https://dev.to/adiatiayu/planning-and-tracking-projects-with-githubs-projects-tool-572l)".

Depending on your team, you can opt-in to have a project board as a tool to track your team's progress. In the Agile environment, it's common to use [Kanban](https://www.atlassian.com/agile/kanban) boards to see the state of the team's progress.
- Navigate to the "Projects" tab.
- Select the "Projects" — I'm using the non-beta one — and click the "New projects" button.

- Add the name of the project.
- Select the project template you find suitable for your team.
For this example, I am using the "Basic kanban". This option creates the "To do", "In progress", and "Done" columns.

- Click the "Create Project" button.

Now you can add the cards and, if necessary, some more columns.
#### ❓ How to add the cards and columns
1. When you have issues available and want to add them in one of the columns.
- Click on the "Add cards" tab on the right side, next to the filter bar.

- Search for the issues.
If you're not sure of the name of the issue, you can type `type:issue`. It will give you a list of the issues in the repo.

- Drag and drop the issue to the target column.
2. When you want to add a note in one of the columns.
- Click on the "+" sign on the target column.

- Enter your note and click the "Add" button.
You can also change your note into an issue:
- Click the three dots symbol on your note's card.

- Select "Convert to issue".

- Write the title and the description of the issue. Then submit it by clicking the "Convert to issue" button.

3. You can add more columns if you need. For example, if you want a column for "Ready to review" or "Ready to merge", etc.
- Click "Add column" on the right side of the page.

- Enter the column name and click the "Create column" button.
- Drag and drop the column to re-order.
You can start to use the board by dragging and dropping the cards between columns.
## Start collaborating with Git and GitHub
Now that you have set up the collaboration environment, it's time to start the project!
There will be times when you work on an issue by yourself.
But there will also be times when you tackle the same issue with others.
### 1. Create a working branch
When collaborating, it's always a good practice to create a new branch for you to work on your issue.
Creating a new branch will prevent you from pushing your changes directly into the `main` branch.
You might first want to be at the local `main` branch. Then create a new branch by running this command:
```bash
git checkout -b branch-name
```
Now you can start to work on your issue.
### 2. Pull the working branch
When you're working on the same issue with other collaborators, one of the collaborators creates a branch and pushes this branch to the `origin` repo.
Then the other collaborators will fetch the `origin` repo, pull, and navigate to this branch by running:
```bash
# fetch the origin repo
git fetch
# pull the working branch
git pull origin branch-name
# navigate to the branch
git checkout branch-name
```
This process will let you and your teammates work on the issue in the same branch.
### 3. Push the working branch to the `origin` repo
You have finished working on the issue and are ready to push your branch.
- Run this command to stage your changes for commit.
```bash
git add .
```
- Commit your changes.
Change the commit message to your message.
```bash
git commit -m "Your commit message"
```
- Make sure your local repo is in the same state as the `origin` to ensure that you are pushing the most updated changes.
- Navigate to your local `main` branch by running this command:
```bash
git checkout main
```
- Pull the `main` branch from the `origin` repo to the local `main` branch.
```bash
git pull
```
If there is no update on the `main` branch, you can proceed to push your changes.
But when there are changes:
- Navigate to your working branch.
```bash
git checkout branch-name
```
- Merge the local `main` branch into your working branch.
```bash
git merge main
```
- Push your branch to the `origin` repo.
```bash
git push origin branch-name
```
#### ➡ Add `Co-authored-by:` to commit message
When you're working on the same issue with others, you want to include them in your commit message. This way, all collaborators will commit together.

Below is how you want to write your commit message:
```bash
git commit -m "Add a component for adding task
>
>
> Co-authored-by: Jane Doe <jane@email.com>
> Co-authored-by: John <john@email.com>"
```
##### 💡 Important to know
1. Exclude the person who commits and pushes the branch from the "Co-authored-by:". Only collaborators other than this person get included with "Co-authored-by:" in the commit message.
2. Commit messages are case-sensitive. So you need to make sure of these things:
- Do not close the commit message with double-quotes (") before you add the "Co-authored-by:".
- Always give two empty lines before writing the line "Co-authored-by".
- Make sure that there is no typo.
It should be `Co-authored-by:`, with a capital C at the beginning and a colon (:) at the end.
Sometimes, you could make a typo on the word "authored". So you want to make sure to check it as well.
- Ensure that the emails are associated with the collaborators' GitHub accounts.
- Close the commit message with double-quotes (") after the last line of "Co-authored-by:".
You can read more about creating a commit with multiple authors [here](https://docs.github.com/en/pull-requests/committing-changes-to-your-project/creating-and-editing-commits/creating-a-commit-with-multiple-authors).
### 4. Create a pull request and merge
Now that you've pushed your branch to the repo, you can start to create a pull request.
- Refresh the `origin` repo page on GitHub, then click the "Compare & pull request" button.

- Enter the title and the description of the pull request. And then, click the "Create pull request" button.

If there is a conflict, you need first to solve it, then push it back to the origin repo.
- If you want your teammates to review your code, add them to the "Reviewers" on the right side of the page.

You can still add more commits to the pull request branch if you have more changes in this state.
- After your teammate reviewed your pull request:
- Make changes in the branch of your local repo if you need to.
Then add, commit, and push them to your branch on the `origin` repo by running these commands:
```bash
git add .
git commit -m "Your commit message"
git push origin branch-name
```
- When they approve your changes, click the "Merge pull request" button.
- You may want to delete the working branch on the `origin` repo. You can do so by clicking the "Delete branch" button.

You can always restore them by clicking the "Restore branch" button in your pull request.

#### ❓ How to resolve conflicts

It's common to encounter git conflicts in collaboration.
Conflicts usually occur when changes are on the same line(s), in the same file(s), from 2 different branches.
You will get notified about the conflicts when you create a pull request.
So, what should we do to resolve the conflicts?
You can do it directly on GitHub by clicking the "Resolve conflict" button.
But it would be better to resolve it in your local environment.
- Navigate to your local `main` branch.
- Pull the latest state of the remote `main` branch to your local `main` branch by running this command:
```bash
git pull
```
- Go to your working branch.
```bash
git checkout branch-name
```
- Merge your `main` branch into the working branch.
```bash
git merge main
```
- Fix the conflict.

Select one of the options:
- Accept Current Change — when you only want to keep the existing change.
- Accept Incoming Change — when you only want to keep the newest change.
- Accept Both Changes — when you want to keep existing and newest changes.
Afterward, you can fix and adjust the codes manually if necessary.
- After fixing the conflicts, add the changes for commit with `git add .`
- Commit the changes.
Don't forget to change the commit message example to your message.
```bash
git commit -m "Merge the main branch into branch-name and resolve conflicts"
```
- Push the changes to the `origin` repo.
```bash
git push origin branch-name
```
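For reference, when git reports a conflict, the affected file contains markers like these (the file content here is hypothetical):

```text
<<<<<<< HEAD
const title = "My Todo App";
=======
const title = "Awesome Todo App";
>>>>>>> main
```

Everything between `<<<<<<< HEAD` and `=======` is your branch's version ("Accept Current Change" keeps it), and everything between `=======` and `>>>>>>> main` is the incoming version ("Accept Incoming Change" keeps it).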
#### ❓ How to write a pull request on GitHub (with examples in markdown)
- Write a clear, descriptive title.
```markdown
# Create an AddTodo component with some functionalities to add tasks to the list
```
- Include the link to the issue.
```markdown
## Link Issue
Closes #123
```
By adding the word "Closes" before the issue number, the issue will automatically close when the pull request is merged.
- Include a clear description.
Write down the changes that you made in this section.
```markdown
## Description
- Created an "AddTodo" component.
- Added a form and the functionality to add tasks to the database.
```
- Mention the type of changes.
Is the type of changes adding a new feature, fixing a bug, or others?
```markdown
## Type of Changes
⭐ New feature
```
- Include screenshots, if any.
- Include the steps to test the changes.
```markdown
## Testing Steps
- Run `npm start`.
- After the page renders, navigate to the add task page by clicking the link on the homepage.
- Enter a task.
- Click submit button.
- Go to the database. The submitted task should now be available and stored in the database.
```
And that's it! 🙌
I hope you have a nice collaboration! 😄
---
Thank you for reading!
Last, you can find me on [Twitter](https://twitter.com/@AdiatiAyu). Let's connect! 😊 | adiatiayu |
1,010,493 | String Manipulation Methods to Memorize | A common tech interview question I’ve received a lot is on string manipulation. This involves a... | 0 | 2022-03-05T04:38:32 | https://dev.to/yani82/string-manipulation-methods-to-memorize-49e0 | javascript, strings, methods, techinterviews | A common tech interview question I’ve received a lot is on [string](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String) manipulation. This involves a request to return a desired value out of a given string.
In this blog, I list down the most common string manipulation methods to memorize so you can easily answer such questions when asked in tech interviews.

### Stringing Along
I was recently asked this coding challenge in an interview. It would have been fairly easy if I hadn't been so nervous and hadn't spent the week before studying only data structures and algorithms.
```
// Write a function called "abbreviate" that outputs a string.
// Given a string, keep the first and last letter and replace the letters in between by the length.
// e.g. internationalization => i18n, localization => l10n
```
It took me longer than usual to remember what methods to use to achieve the output desired. Of course, using the handy `console.log`, I was able to test out some possible methods, but I still had to search for specific ones online before getting the solution. Below was what I came up with:
```
const abbreviate = (input) => {
return input.length < 3 ? input : [input[0], input.length - 2, input[input.length-1]].join('');
};
const result = abbreviate("internationalization");
console.log(result);
```
This in turn made me reevaluate ways to etch these common string manipulation methods in my head. As a result, I'm compiling the cheat list below to assist with that.
### Common String Manipulation Methods

#### str.length
- returns the length of the string
```
let str = "zen";
console.log(str.length); // outputs 3
```
#### charAt(index)
- treats the string as an array of characters
- retrieves the character at the index provided
- used to check string for consistency
- the last index is `string.length - 1`
```
let str = 'Strings';
console.log(str.charAt(3)); // outputs i
console.log(str.charAt(6)); // outputs s
```
#### concat(string)
- concatenates two strings into one
- used to append to a string or combine strings
```
const str1 = 'purple';
const str2 = 'balloon';
console.log(str1.concat(' ', str2)); // outputs 'purple balloon'
// or by inserting the variables into a template literal to achieve cleaner code
const sampleStr = `${str1} ${str2}`;
console.log(sampleStr); // outputs purple balloon
```
#### includes(string)
- check whether or not a string contains a substring
```
const str = 'what up';
console.log(str.includes('what')); // true
console.log(str.includes('down')); // false
```
#### match(regex string)
- checks a string against a regular expression; returns an array of matches, or null if there is no match
```
const firstName = "Matt";
const badFirstName = "Matthew4";
const nameRegex = /^[a-zA-Z]+$/
console.log(firstName.match(nameRegex)); // ["Matt"] (truthy)
console.log(badFirstName.match(nameRegex)); // null (falsy)
```
#### replace(stringToBeReplaced, stringToAdd)
- replaces the first occurrence of a character (or substring) in a string with another character
```
const userInput = '917 716 4543';
console.log(userInput.replace(' ', '-')); // 917-716 4543 (only the first space is replaced)
```
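One thing worth knowing: when the first argument is a plain string, `replace` only swaps the first occurrence. To replace every occurrence, use a regular expression with the `g` flag, or `replaceAll` in modern environments:

```javascript
const userInput = '917 716 4543';

// A string pattern only replaces the first match:
console.log(userInput.replace(' ', '-')); // 917-716 4543

// A regex with the g flag replaces all matches:
console.log(userInput.replace(/ /g, '-')); // 917-716-4543

// replaceAll does the same (ES2021+):
console.log(userInput.replaceAll(' ', '-')); // 917-716-4543
```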
#### split(string)
- return an array of substrings when needing to split a string
```
const seeyou = "See You";
const seeYouSplit = seeyou.split(' ');
console.log(seeYouSplit); // ["See", "You"];
console.log(seeYouSplit[0]); // "See"
```
#### substring(index, index)
- extracts a piece of a string: you pass in the index you want the substring to start at and the index you want it to end at (the end index itself is not included)
```
const goodbye = 'Goodbye Friend';
console.log(goodbye.substring(1, 4)); // ood
```
#### toLowerCase()/toUpperCase()
- used to normalize a string's casing so comparisons aren't case-sensitive
```
const firstName = "Yani";
console.log(firstName.toUpperCase()); // YANI
```
#### trim()
- removes whitespace from both ends of a string
```
const strWithSpace = 'Yani ';
console.log(strWithSpace.trim()); // outputs 'Yani'
```
### Conclusion
The main takeaway from this blog is that it's always smart to strengthen your foundational knowledge first before embarking on more complex topics like data structures and algorithms. I hope this blog was helpful!
| yani82 |
1,010,626 | SwitchMap RxJS Operator | SwitchMap RxJS Operator | 0 | 2022-03-05T06:56:16 | https://dev.to/pawankkumawat/switchmap-rxjs-operator-1m8h | javascript, angular, programming, performance | [SwitchMap RxJS Operator](https://pawan-kumawat.medium.com/rxjs-switchmap-operator-4b045e2fbbda) | pawankkumawat |
1,011,319 | Understanding Docker in a visual way (in 🎥 video): part 11 – Pass build args | Serie of videos about Docker. Explaining in a visual way Docker principles. | 15,506 | 2022-03-05T16:26:43 | https://dev.to/aurelievache/understanding-docker-in-a-visual-way-in-video-part-11-pass-build-args-30k0 | docker, devops, containers, beginners | ---
title: Understanding Docker in a visual way (in 🎥 video): part 11 – Pass build args
published: true
description: Serie of videos about Docker. Explaining in a visual way Docker principles.
tags: Docker, DevOps, containers, beginners
series: Understanding Docker in a visual way - in video
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3njdlafjgphgh4yj79mn.jpg
---
Understanding Docker can be difficult or time-consuming. In order to spread knowledge about Cloud technologies, I started to create sketchnotes about Docker, then I self-published a book, and now I've started a new series of videos! :-)
I imagined a series of short videos with a mix of sketchnotes and speech.
I think it could be a good way, more visual, with audio & video to explain Docker (and others technologies).
The 11th episode talks about how to pass build args.
{% youtube yckBJXq-7ig %}
The video is in French BUT I did the subtitles in English (and French too).
And as a bonus for this article, here you can find all the sketchnotes and illustrations from the video:

You can find all the commands you can see in this video and sketchnotes in the following GitHub repository:
[understanding-docker-in-a-visual-way (GitHub repository)](https://github.com/scraly/understanding-docker-in-a-visual-way)
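To give you an idea before watching, here is a minimal, hypothetical example of a build arg (the file content and argument name are mine, not from the video). In the Dockerfile you declare the argument with `ARG`, optionally with a default value:

```dockerfile
# Hypothetical Dockerfile using a build arg with a default value
FROM alpine:3.15
ARG APP_VERSION=dev
# The arg is only available at build time (not at run time)
RUN echo "Building version ${APP_VERSION}"
```

You can then override the default at build time with `docker build --build-arg APP_VERSION=1.2.3 .`.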
If you liked the video and are interested in watching more, please give me some feedback (and you can also subscribe to [my Youtube channel](https://www.youtube.com/c/AurelieVache), if you want ❤️).
I can also give you, in an article, the full English transcript and high-quality sketchnotes for this video, if you are interested.
If you are interested, I published all the sketchnotes on Docker (and new ones!) to make a "book" of 120 pages: ["Understanding Docker in a visual way"](https://aurelievache.gumroad.com/l/understanding-docker-visual-way).
If you like these sketchnotes, you can follow me; I will publish other sketches shortly :-). | aurelievache |
1,012,411 | Mastering Bash arguments with getopts | In this article, I show you how to master Bash arguments with getopts to have a software who... | 0 | 2022-03-06T10:48:34 | https://dev.to/hamdyaea/mastering-bash-arguments-with-getopts-iac | bash, linux, arg, getopts | > In this article, I show you how to master Bash arguments with getopts, so that your software runs with professional arguments like
<pre>
mysoft.bsh -f [arg1] -g [arg2]
</pre>
I also show you how to add a default value if the argument is not given.
Here's an example of code :
<pre>
# "usage" was not defined in the original snippet; here is a minimal version:
usage() {
  echo "Usage: $0 [-f 15|75] [-g value]" >&2
  exit 1
}

f=10
g=5
while getopts ":f:g:" option; do
    case "${option}" in
        f)
            f=${OPTARG}
            ((f == 15 || f == 75)) || usage
            ;;
        g)
            g=${OPTARG}
            ;;
        *)
            usage
            ;;
    esac
done
shift $((OPTIND-1))
if [ -z "${f}" ] || [ -z "${g}" ]; then
    echo "info"
fi
echo "f = ${f}"
echo "g = ${g}"
</pre>
Now we can look at the code more closely :
Here we declare default values for the -f and -g options:
<pre>
f=10
g=5
</pre>
Here we use getopts to parse the -f and -g options, so the user can run software.bsh -f value1 and/or -g value2:
<pre>
while getopts ":f:g:" option; do
case "${option}" in
f)
f=${OPTARG}
((f == 15 || f == 75)) || usage
;;
g)
g=${OPTARG}
;;
*)
usage
;;
esac
done
</pre>
That makes the arguments in Bash more professional.
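As a quick check, here is the same pattern wrapped in a function (a sketch of my own: the option names and defaults follow the article's example, but the value validation is left out), so you can see how the parsed values come out:

```shell
#!/usr/bin/env bash
# Reusable sketch of the getopts pattern above, wrapped in a function
# so it can be called repeatedly.
parse_args() {
  local f=10 g=5 option OPTIND=1
  while getopts ":f:g:" option; do
    case "${option}" in
      f) f=${OPTARG} ;;
      g) g=${OPTARG} ;;
      *) echo "usage: parse_args [-f value] [-g value]" >&2; return 1 ;;
    esac
  done
  echo "f=${f} g=${g}"
}

parse_args -f 15 -g 7   # prints: f=15 g=7
parse_args              # prints: f=10 g=5 (defaults)
```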
 | hamdyaea |
1,012,558 | Step by Step guide on building a custom React hook in Typescript | According to the results of the annual survey of the State of Javascript, it doesn’t seem like React... | 0 | 2022-03-06T11:58:52 | https://relatablecode.com/step-by-step-guide-on-building-a-custom-react-hook-in-typescript/ | typescript, programming, javascript, webdev | ---
title: Step by Step guide on building a custom React hook in Typescript
published: true
date: 2022-03-06 11:54:09 UTC
tags: typescript,programming,javascript,webdevelopment
canonical_url: https://relatablecode.com/step-by-step-guide-on-building-a-custom-react-hook-in-typescript/
cover_image: https://cdn-images-1.medium.com/max/1024/0*IGKOJlNQ7xoBAx3F.jpg
---
According to the results of the annual survey of the [State of Javascript](https://relatablecode.com/important-takeaways-from-the-state-of-javascript/), it doesn't seem like React or TypeScript is going anywhere anytime soon, so it's worth taking some time to learn how they work!
React hooks have revolutionized the way we can build React components as they tend to be considerably more intuitive than Class Components. However, one feature that isn’t taken advantage of nearly as much as it should be, is the ability to create custom hooks!
Custom hooks let us abstract away the logic of React components and reuse it! I suggest only doing this with logic that actually gets reused a ton throughout your web application.
More info about hooks can be found [here](https://reactjs.org/docs/hooks-intro.html).
For the sake of this article, the example I’m going to be creating is a **useToggle** hook! Toggling something in the UI is quite common so we should get a lot of mileage out of this one.
### Building the hook
First, let’s create the file **useToggle.ts** , and let’s build the skeleton for our hook. All hooks must begin with the word **_use_**!

A toggle hook will typically just rely on toggling a boolean state from **true** to **false** and vice versa, however, to make it more complete let’s add some additional, _optional_, functionality to the hook where we can completely set it to **false** or **true**.
Let’s create the state and the skeleton of the functions:

You should import the appropriate hooks from React itself, in this case, **useState** and **useCallback**.
```
import {useState, useCallback } from 'react';
```
The **useState** hook has access to the previous state, this is generally safer to use so we’ll just toggle it with this functionality. The other two functions, **close** and open, will set the state to either true or false directly. The state of the toggle and the three functions will get returned in an array.
### Typescript
Last but not least, let’s give our hook some type-safety by letting the function know what we are expecting to return.

We return an array with the internal state of the hook, and the 3 functions to alter the state!
As a little extra we can add an initial state to the hook in case we want it to start off as closed or opened:

### Conclusion
And that’s it! Hooks are a great way to abstract logic used in react components.
Here’s an example of the hook in action:
More content at [Relatable Code](https://relatablecode.com).
If you liked this feel free to connect with me on [LinkedIn](https://www.linkedin.com/in/diego-ballesteros-9468a7136/) or [Twitter](https://twitter.com/relatablecoder)
_Originally published at_ [_https://relatablecode.com_](https://relatablecode.com/step-by-step-guide-on-building-a-custom-react-hook-in-typescript/) _on March 6, 2022._ | diballesteros |
1,014,301 | Terminal Comand | Today we learned terminal commands ls prints everything in the current folder... | 0 | 2022-03-07T11:42:24 | https://dev.to/justakrom/terminal-comand-273c | programming, terminal, cpp, uzbek | ### Today we learned terminal commands
### `ls`
- prints everything in the current folder
- with the `-a` option, hidden files and folders are also printed
### `pwd`
- prints the path of the folder we are currently in
- the `~` path is the same as the `$HOME` path
### `cd` (change directory)
- changes the current directory, i.e., takes you to another location
```
~ # cd Documents //
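~ # pwd
/root
~ # cd Documents
~/Documents # pwd
/root/Documents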
``` | justakrom |
1,014,626 | Difference between web2 and web3 | In this article I’ll be explaining the difference between the trending technologies we’ve been seeing... | 0 | 2022-03-07T15:23:10 | https://dev.to/tuasegun/difference-between-web2-and-web3-278o | zuri, blockchain, blockgames, web3 |
In this article I'll be explaining the difference between the trending technologies we've been seeing recently: the web2 and web3 technologies.
To understand the difference, we first have to know the meaning of each of these technologies and the similarities between them.
## Web2
Web2 describes the current state of the internet, the version that we currently use, which involves user-generated content and is controlled by corporations. It involves sending data via various APIs and storing it on centralized storage platforms or cloud platforms. It really shaped the mainstream communication of today, because most social media apps are built on it. A major difference between web2 and web3 is user identity: while web2 uses the user's name, date of birth, and other sensitive information to identify users, web3 uses a generated address or token as the means of identification. This keeps the user's identity less exposed and can reduce identity theft.
Web2 has its advantages, as it has helped bring media and communications to a higher level, and it has generated lots of revenue for its administrators.
## Web3
Now for web3. Web3 is the third version of the internet, in which the internet processes information through machine learning and decentralized technology. Both users and machines will have access to the data; for this to happen, the data's context and concept must be understood.
It is widely expected that web3 will be built on decentralization and the blockchain, and that it will use these in combination with artificial intelligence to interact with the real world. This will help the technology deliver a faster and more personalized user experience. Unlike web2, interacting with web3 could earn you some money. This includes the usage of tokens on the blockchain, play-to-earn games, decentralized autonomous organizations, and non-fungible tokens (NFTs).
Web3 is a trustless, open, permissionless, and ubiquitous platform.
## The major differences between web 2.0 and web 3.0 include
1. Decentralization: information is free and cannot be controlled or censored, because it is decentralized.
2. Web 3.0 does not require personal data or information to make payments or purchase anything.
3. Web2 servers can crash, while web3 uses decentralized storage systems that replicate data across several nodes.
I hope I've clearly explained the difference between web2 and web3 in this article. Thank you for your time. Drop your questions below if you have any.
This article is written on behalf of [blockgames](https://blockgames.gg/), [nestcoin](https://nestcoin.com/) and [zuri](https://zuri.team/) | tuasegun |
1,014,829 | Reactjs Explore | Component Lifecycle: React web apps are actually a collection of independent components that run... | 0 | 2022-03-07T16:24:34 | https://dev.to/rajukst/reactjs-explore-1oc2 | Component Lifecycle: React web apps are actually a collection of independent components
that run according to the interactions made with them. Every React component has a lifecycle of its own; the lifecycle of a component can be defined as the series of methods that are invoked at different stages of the component's existence. A React component's lifecycle has four main stages: initialization, mounting, updating, and unmounting.
Initialization: This is the stage where the component is constructed with the given props and default state. This is done in the constructor of a component class.
Mounting: Mounting is the stage of rendering the JSX returned by the render method itself.
Updating: Updating is the stage when the state of a component is updated and the application is re-rendered.
Unmounting: As the name suggests, unmounting is the final step of the component lifecycle, where the component is removed from the page.
Context API: The Context API is a way to produce global variables that can be passed around. It saves you from moving props from grandparent to child and from child to parent, and it does this effectively. The Context API mainly provides a Consumer and a Provider. The Provider is a component that, as its name suggests, provides the state to its children. To create a context, you call createContext().
Custom Hook: A custom hook is a JavaScript function that we create ourselves to share logic between components. It allows us to reuse a piece of code in several parts of an app.
Virtual DOM: The virtual DOM is a copy of the original DOM kept in memory and synced with the real DOM. The virtual DOM has the same properties as the real DOM, but it lacks the power to directly change what is displayed on the screen. That is the key difference between the virtual and real DOM: the virtual DOM is just an in-memory copy of the real DOM. | rajukst |
1,015,015 | We did it! | Just finished my first code challenge and the anxiety was through the roof. Had to display elements... | 0 | 2022-03-07T18:46:49 | https://dev.to/danielarmbruster0314/we-did-it-3o | webdev, beginners, programming, codenewbie | Just finished my first code challenge, and the anxiety was through the roof. I had to display elements to the DOM by utilizing an API on a local server. It sounds a lot scarier than it turned out to be, but I'm definitely not a fan of timed challenges: too much pressure. Wishing all fellow programmers a moment of resolve. | danielarmbruster0314 |
1,015,109 | Difference between web3 and web2 | Web2 is the Internet as we know it today, whereas Web3 refers to the evolution and next generation of... | 0 | 2022-03-07T21:54:34 | https://dev.to/somtozech/difference-between-web3-and-web2-1h3a | blockchain, web3, web2 | Web2 is the Internet as we know it today, whereas Web3 refers to the evolution and next generation of the Internet.
Web2 is based on a server-client structure, meaning that a corporation or entity controls and owns your personal data in exchange for their service. In web3, anyone can participate without having to share their personal data. No single entity controls or owns the data, as it is stored on the blockchain.
In web2 there is a single point of failure: malicious actors may be able to take down the network by targeting the central authority, while this is not possible in web3 since it is distributed.
https://blockgames.gg/, https://nestcoin.com/, https://zuri.team/ | somtozech |
1,015,285 | pnpm and Parcel based monorepo | The problem I have tried several ways of managing JavaScript/TypeScript Library Monorepos... | 0 | 2022-03-13T15:57:16 | https://dev.to/pklaschka/pnpm-and-parcel-based-monorepo-4ojc | javascript, monorepo, parcel, pnpm | ## The problem
I have tried several ways of managing JavaScript/TypeScript Library Monorepos in the past: `lerna`, `yarn workspaces`, etc.
Now don't get me wrong: These are great tools and I very much appreciate the effort their authors have put into them.
**But they always felt a little bit like gambling.** It never felt like I was really in control of what was happening (with a lot of black magic), and I found that they felt a bit ... fragile (I was always worried about breaking some symlinks or stuff like that when running any commands).
## A solution?
I wanted to try both [`pnpm`](https://pnpm.io/) and [Parcel](https://parceljs.org/). I had heard good things about both tools and have recently gotten more and more frustrated with their more established competitors.
When I looked at their respective documentation pages, it looked like both had great monorepo support. Since I was also still on a long-lasting search for some "building an npm library"-compatible monorepo solution with a better developer experience than what I had seen so far, I decided to give it a shot.
## The repository
So, I built (and documented) a test repository to try this new monorepo setup:
{% embed https://github.com/pklaschka/pnpm-parcel-monorepo-test %}
The repository contains a test setup with a more or less full tech stack consisting of, among others:
- TypeScript
- ESLint
- Prettier
- [fliegdoc](https://dev.to/fliegwerk/and-so-i-wrote-my-own-typescript-documentation-generator-2l2f) (a self-built documentation generator)
- jest / ts-jest
- GitHub Actions
I described most things in the `README.md`, but I also created an additional [public Notion page](https://pklaschka.notion.site/Building-and-Publishing-d444abcb209748ec9cea13b9da59bad5#96ee0ded13d54d3eacd18e02de43509f) describing more details.
## Results
**I'm really happy with how it works and will definitely use this approach in the future.** I'll also probably migrate existing monorepos to this approach, in the future.
### Advantages
- :green_circle: **you feel like you're in control** with `pnpm`, it's pretty straight-forward to understand how their workspace system works, so you feel like you're in control and don't have to guess about fixes to your problems :tada:. E.g., `pnpm install` sets up everything. Before, I always was unsure if I should now run `npm install`, `lerna bootstrap`, or something else.
- :green_circle: **quick build times** since `parcel` [builds all the packages at once](https://pklaschka.notion.site/Building-and-Publishing-d444abcb209748ec9cea13b9da59bad5#96ee0ded13d54d3eacd18e02de43509f) (instead of running build scripts one package at a time), build times (especially with the build cache in place) are incredibly fast
- :green_circle: **development experience** with `parcel watch`, it's possible to very quickly develop
- :green_circle: **"native" workspaces** working with workspaces / multiple packages feels "native" to `pnpm` (compared to its competitors, where I unfortunately found that it feels more like a "hacky side feature"). Commands work how I would expect them to work, "internal" dependencies between packages [automatically get hydrated with version numbers on publish](https://pnpm.io/workspaces#publishing-workspace-packages), and so on.
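For illustration, a pnpm workspace is declared with a small `pnpm-workspace.yaml` at the repository root (the glob below is an assumption; the test repository linked above shows the actual layout):

```yaml
# pnpm-workspace.yaml
packages:
  # every folder under packages/ is treated as a workspace package
  - 'packages/*'
```

With that in place, running `pnpm install` in the root wires up all packages, and an internal dependency can be declared as `"my-lib": "workspace:*"` so pnpm replaces it with a real version number on publish.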
### Drawbacks
Of course, every approach comes with a few drawbacks. Here are the ones I've found so far:
- :yellow_circle: **less ecosystem support** while `pnpm` and `parcel` work great in 99 % of cases, tools often don't test their support for these as much as, for example, for `yarn` and `webpack`
- :green_circle: **(no Dependabot support)** at the time of writing this, GitHub's _Dependabot_ doesn't support `pnpm`. Thankfully, [Renovate](https://docs.renovatebot.com/) seems to work well.
- :green_circle: **(no included "release automation" tooling)** `lerna` came with great Changelog / Conventional Commit / ... based release automation tooling. Unfortunately, I haven't yet found a great solution for doing something similar with `pnpm`. _Do you have any recommendations?_
### A quick fix for a Parcel bug that almost made me dismiss it
When I initially tested Parcel, it felt unstable. It wouldn't shut down, would from time to time just overwrite my `package.json`, and just overall not work very well at all.
I was almost ready to give up when I found [this issue on GitHub](https://github.com/parcel-bundler/parcel/issues/7271). It turns out that I had a `package-lock.json` somewhere higher up the file tree (probably some backup I had created before). This lead to Parcel showing all kinds of weird behavior (not only the one described in the issue). So if you decide to try this approach and feel like Parcel is "freaking out" in a weird way, it might be worth checking for `package.json`, `packaage-lock.json` or similar files higher up in the file tree.
**So overall, this is easy to fix. But this almost made me (which would have been a shame!) turn away from Parcel, so I wanted to include this note here.**
### Even more details
Furthermore, I've documented everything I learned about the process / how the repo is structured in a [Notion Page](https://pklaschka.notion.site/pnpm-and-parcel-for-monorepos-56107d7839594eec8ef5b6d0e9abcb1c). If you decide to try this monorepo configuration, this could be useful to you as it includes all the commands you need to know, links to various important resources, and so on.
## Author
{% user pklaschka %} | pklaschka |
1,015,307 | How to use Environment Variables in NextJS | First of all, create a file in the project root named .env MY_NAME=AWAN; Enter... | 0 | 2022-03-08T02:00:43 | https://dev.to/awan/cara-menggunakan-environment-variables-di-nextjs-756 | nextjs, indonesia, pemula | First of all, create a file in the project root named .env:
```
MY_NAME=AWAN;
```
After that, we can read the environment variable in our code like this:
```
const name = process.env.MY_NAME;
console.log(name);
```
and the value will be printed to the console.
The approach above can only be accessed by V8, i.e., by Node.js itself. When we try to access that variable in the browser, it will be undefined.
That's why there is another way to define the environment variables that the browser also needs to be able to read: add the NEXT_PUBLIC_ prefix, like this:
```
NEXT_PUBLIC_MY_NAME=AWAN PUBLIC
```
Reading the environment variable in code works just the same:
```
const name_public = process.env.NEXT_PUBLIC_MY_NAME;
console.log(name_public);
```
and this time the value can also be seen in the browser.
> When using environment variables, there is sometimes private data that the browser must not know about, so we can use the first method of defining a variable; and there are times when we need data the browser must know, in which case we use the second method.
I've uploaded the example project to GitHub: https://github.com/awanz/nextjs-env | awan |
1,015,796 | Assign Roles and Permissions to Users | Spatie Laravel Permission | Laravel 9 Tutorial | Hello Friends, in this video we are going to assign roles and permissions to users. Laravel Admin... | 0 | 2022-03-08T07:48:15 | https://dev.to/laravellercom/assign-roles-and-permissions-to-users-spatie-laravel-permission-laravel-9-tutorial-2kpn | laravel, php, beginners, tutorial | {% embed https://youtu.be/Bze4dd4WWYU %}
Hello Friends, in this video we are going to assign roles and permissions to users.
Laravel Admin Panel.
Laravel Tutorial.
Laravel 9 Tutorial.
Spatie role and permission.
Laravel permission.
Github Repo: https://github.com/laraveller/laravel-permission
Support me:
https://www.patreon.com/Laraveller
https://paypal.me/TonyXhepa
Follow me:
Twitter: https://twitter.com/Laravellercom
Facebook Page: https://www.facebook.com/1laravellercom
Instagram: https://www.instagram.com/laravellercom/
Github: https://github.com/laraveller
Website: https://laraveller.com/
Playlists:
Laravel Testing For Beginners: https://bit.ly/3t1gNq4
Laravel Roles and Permissions: https://bit.ly/3gOhM7d
LARAVEL INERTIA MOVIE APP: https://bit.ly/3FVMp4Q
Laravel Livewire Movie App: https://bit.ly/3s8D6v1
Laravel Classified Website - https://bit.ly/3nsFRnb
Livewire Employees - https://bit.ly/2ZtIpZY
Laravel Employees Management - https://bit.ly/3Gglt14
Laravel admin panel - https://bit.ly/3CcX75M | laravellercom |
1,016,047 | Lava lamps securing the Web?🤷♂️ | computer generates random number to be used for encryption key, The more random they are the more... | 0 | 2022-03-08T12:37:22 | https://dev.to/leoantony72/lava-lamps-securing-the-web-4d20 | cryptography, security, hashing, python | computer generates random number to be used for encryption key, The more random they are the more secure.computer are made to provide predictable output based on given inputs so it isn't suitable for generating encryption key.Nowadays computers have secure lib that provide **CSPRNG**(Cryptographically-secure pseudorandom number generator) So it's suitable for using as key.
computers can also be fed with real word random data because events in the physical world are unpredictable and one of the sources of randomness cloudflare uses is **Lava Lamps**.

Cloudflare have about 100 lava lamps with camera pointed to them and at regular interval they take pictures and send them to cloudflare servers. Images are stored as series of numbers with each pixel having its value ,So each images becomes a string of **total random numbers** .This becomes their base to encrypt the internet.
**Cryptographic seed** - the data that a CSPRNG program starts with for generating random data.
**CSPRNG **needs **seed **(cryptographic seed) as a starting point from which to produce more random data.Clouflare uses a CSPRNG to generate random key with data collected from lava lamps as the seed.
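To make the idea of seeding concrete, here is a toy illustration in Python (my own sketch, not Cloudflare's actual pipeline): hashing the raw bytes of a "photo" condenses them into a fixed-size piece of seed material, and any tiny change in the image yields a completely different seed.

```python
import hashlib

# Toy illustration: hash the bytes of a "lava lamp photo" to derive
# a fixed-size cryptographic seed that could feed a CSPRNG.
def seed_from_image(image_bytes: bytes) -> bytes:
    return hashlib.sha256(image_bytes).digest()

photo_one = b"\x01\x02\x03" * 1000  # stand-in pixel data
photo_two = b"\x01\x02\x04" * 1000  # one "pixel" changed

seed_one = seed_from_image(photo_one)
seed_two = seed_from_image(photo_two)

print(len(seed_one))         # 32 bytes of seed material
print(seed_one != seed_two)  # True: a tiny change flips the whole seed
```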
that's it for this blog , see you next time with more interesting stuff...
| leoantony72 |
1,017,162 | Emojicode😎 (Variables) | Part 2 of the Series on Emojicode. Let's dive deep into coding with emojis. In this part, I'll try... | 17,219 | 2022-03-09T09:52:06 | https://dev.to/knaagar/emojicode-variables-3jde | programming, tutorial, beginners, codenewbie | _Part 2 of the Series on Emojicode._
Let's dive deep into coding with emojis.
In this part, I'll try to introduce you to what variables are and how they are implemented in Emojicode 💪.
#### Variable 🙄❓
A variable is simply a name that represents a particular piece of your computer’s memory that has been set aside for you to store, retrieve, and use data.
Variables store data. We had a small introduction to strings in the last part - strings are a _data type_!
Data types just tell the computer what type of data a value is.
#### Declaring and Assigning Variables:
Emojicode lets you declare two types of variables:

**Declaring a constant variable:**
```
🔤Earth🔤 ➡️ planet
💭🔜 planet is a variable, ➡️ is the assigning operator, Earth is a string and is the value of the planet variable 🔚💭
```
**Declaring a mutable variable:**
```
6 ➡️ 🖍🆕 coins
💭🔜 coins is a variable, ➡️ is the assigning operator, 6 is an integer and is the value of the coins variable, 🖍🆕 means mutable 🔚💭
💭 Mutable variables *need* to be mutated or emojicode will smash an error on you.
```
#### String Interpolation (printing your variable):
Just declaring variables is boring 🥱, we need to print them out.
To print variables, we need to wrap them around 🧲.

#### Arithmetic Operations:
| Emoji | Operation | Example |
| ------------- |:-------------:| -----:|
|➕| addition| 2 ➕ 2 ➡️ number 💭 number is 4
|➖| subtraction| 2 ➖ 2 ➡️ number 💭 number is 0
|✖️| multiplication| 2 ✖️ 2 ➡️ number 💭 number is 4
|➗| division| 2 ➗ 2 ➡️ number 💭 number is 1
|🚮| modulo| 2 🚮 2 ➡️ number 💭 number is 0
#### Mutate the mutables 😎! Operator Assignments.
In Emojicode, operator assignment provides a short-hand method for modifying the value of variables.
Suppose we have a variable like this:
```
2 ➡️ 🖍🆕 mutate 💭 mutate is 2
```
| Emoji | Operation | Example |
| ------------- |:-------------:| -----:|
|⬅️➕| addition assignment| mutate ⬅️➕ 1 💭 mutate is now 3
|⬅️➖| subtraction assignment| mutate ⬅️➖ 1 💭 mutate is now 2
|⬅️✖️| multiplication assignment| mutate ⬅️✖️ 2 💭 mutate is now 4
|⬅️➗| division assignment| mutate ⬅️➗ 2 💭 mutate is now 2
|⬅️🚮| modulo assignment| mutate ⬅️🚮 2 💭 mutate is now 0
#### Let's code a program 💻

_This is a very basic program. Pardon me if you are **hecker** 😁_

## See you in next part! Stay tuned :)
| knaagar |
1,022,495 | Designing an API for a Video Game | I am working on a video game for programmers. The game is played by writing code that interacts... | 0 | 2022-03-14T17:24:02 | https://www.jdno.dev/designing-an-api-for-a-video-game/ | gamedev, devlog | ---
title: Designing an API for a Video Game
published: true
date: 2022-03-13 17:00:00 UTC
tags: gamedev,devlog
canonical_url: https://www.jdno.dev/designing-an-api-for-a-video-game/
---

I am working on a video game for programmers. The game is played by writing code that interacts with the game through an API. This post explores the design of the API that connects the player's code, the API layer, and the game simulation.
## A Video Game for Programmers
I have been thinking about a [video game for programmers](https://www.jdno.dev/why-i-want-to-build-a-video-game-for-programmers/) for a while now. What differentiates such a game from others is that it is played by writing code. The player cannot interact with the game world themselves but only through an API. Essentially, they create a program that plays the game for them.
The time has come to make this dream a reality and build a working game. I am working on [Auto Traffic Control (ATC)](https://dev.to/jdno/auto-traffic-control-a-video-game-for-programmers-f9k), a game in which players have to safely route airplanes to an airport.
## ATC as a Distributed System
Over the past two years, I experimented with different ideas for the API and the architecture of the game. One prototype had a [REST](https://en.wikipedia.org/wiki/Representational_state_transfer) interface, another tried to wire RPC calls into the core loop inside the game engine, and there were a few that explored various asynchronous interfaces such as message queues. But all of them eventually ran into problems, with a root cause that in hindsight is totally obvious:
At its core, the game that I want to build is a [distributed system](https://en.wikipedia.org/wiki/Distributed_computing).
There are three components that are independent of each other: the player's code, the API layer, and the game simulation. The player's code runs in its own process and is totally decoupled from the game. But even the API layer and the game simulation are only loosely coupled, and run in parallel in different threads. The result is a system in which all components run concurrently, and any attempt to introduce tight coupling between them is bound to fail.
## Event Sourcing and CQRS
The architecture for the game takes a lot of inspiration from [Event Sourcing and CQRS](https://danielwhittaker.me/2020/02/20/cqrs-step-step-guide-flow-typical-application/). The player sends a _command_ to the API, which validates it and then puts it into a queue. The different systems that power the simulation inside the game take commands from the queue and make the corresponding change to the simulation. The change in turn triggers an _event_, which is sent back to the API via another queue. The API receives the event and sends it over a stream to the player.

The diagram above visualizes this architecture inside the game. The API on the left has different services that each manage a single resource. The event source streams events to the player, while the airplane service allows players to interact with the airplanes in the game. The API is coupled with the simulation on the right through two queues: the _event bus_ and the _command bus_. The game is built with the game engine [Bevy](https://bevyengine.org/), which runs many _systems_ in parallel to simulate the game. All systems are connected to the event bus so that they can publish any change they make to the game world. And systems that can be influenced by the player are connected to the command bus as well.
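A minimal sketch of that flow with plain queues (command and event names like `UpdateFlightPlan` are hypothetical, not the game's actual API):

```python
from queue import Queue

command_bus: Queue = Queue()  # API layer -> simulation
event_bus: Queue = Queue()    # simulation -> API layer

# The API validates a player's command and enqueues it.
command_bus.put({"type": "UpdateFlightPlan", "airplane": "AT-4321"})

# A simulation system drains the command bus, applies the change to the
# game world, and publishes the resulting event.
while not command_bus.empty():
    command = command_bus.get()
    event_bus.put({"type": "FlightPlanUpdated", "airplane": command["airplane"]})

# The API picks the event up and would stream it back to the player.
event = event_bus.get()
print(event["type"])  # FlightPlanUpdated
```

Because each side only ever touches its own end of a queue, the API and the simulation never need to share locks or call into each other directly.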
This architecture has worked really well in experiments, and I can't wait to see how well it'll work for a full game. Stay tuned to learn more about that in the future!
## Follow the Project
If you're interested in this game or programming games in general, make sure to follow along. I am not sure where the road will take us, but I am very excited for the journey!
Subscribe to my blog to receive weekly updates about the progress on the project.
[jdno.dev](https://jdno.dev)
I also stream some if not all of the development of this game on Twitch, so follow me there as well.
[jdno_dev - Twitch](https://www.twitch.tv/jdno_dev) | jdno |
1,038,906 | Weekly web development resources #115 | GitHub Readme Stats A tool to automatically generate dynamic statistics for your GitHub... | 0 | 2022-03-30T06:22:54 | https://dev.to/vincenius/weekly-web-development-resources-115-5oj | weekly, webdev |
______
##[GitHub Readme Stats](https://github.com/anuraghazra/github-readme-stats)
[](https://github.com/anuraghazra/github-readme-stats)
A tool to automatically generate dynamic statistics for your GitHub readmes.
______
##[12ft](https://12ft.io/)
[](https://12ft.io/)
A website that helps you remove paywalls from articles.
______
##[React Arborist](https://github.com/brimdata/react-arborist)
[](https://github.com/brimdata/react-arborist)
A full-featured tree component for React.
______
##[MiroTalk P2P](https://github.com/miroslavpejic85/mirotalk)
[](https://github.com/miroslavpejic85/mirotalk)
A simple and secure WebRTC real-time video conferencing tool.
______
##[Hackertab](https://hackertab.dev/)
[](https://hackertab.dev/)
A website that shows the latest tech news, tools, jobs and events.
______
##[Emoji Button](https://emoji-button.js.org/)
[](https://emoji-button.js.org/)
A simple vanilla JavaScript emoji picker that supports all Unicode emojis.
______
##[Dynamic Open Graph Images](https://braydoncoyer.dev/blog/how-to-dynamically-create-open-graph-images-with-cloudinary-and-next.js)
[](https://braydoncoyer.dev/blog/how-to-dynamically-create-open-graph-images-with-cloudinary-and-next.js)
A nice article on how to dynamically create open graph images with Cloudinary and Next.js.
______
##[Hyperbeam API](https://www.hyperbeam.dev/)
[](https://www.hyperbeam.dev/)
An API to add intuitive multi-control support to any website.
______
##[Tao of Node](https://alexkondov.com/tao-of-node/)
[](https://alexkondov.com/tao-of-node/)
A list with design, architecture & best practices for Node.js.
______
##[Ladle](https://www.ladle.dev/)
[](https://www.ladle.dev/)
A library to develop and test your React stories faster.
______
To see all the weeklies check: [wweb.dev/weekly](https://wweb.dev/weekly) | vincenius |
1,113,354 | Divide a String Into Groups of Size k | A string s can be partitioned into groups of size k using the following procedure: The first... | 18,343 | 2022-06-13T23:00:48 | https://dev.to/theabbie/divide-a-string-into-groups-of-size-k-48mj | leetcode, dsa, theabbie | A string `s` can be partitioned into groups of size `k` using the following procedure:
* The first group consists of the first `k` characters of the string, the second group consists of the next `k` characters of the string, and so on. Each character can be a part of **exactly one** group.
* For the last group, if the string **does not** have `k` characters remaining, a character `fill` is used to complete the group.
Note that the partition is done so that after removing the `fill` character from the last group (if it exists) and concatenating all the groups in order, the resultant string should be `s`.
Given the string `s`, the size of each group `k` and the character `fill`, return _a string array denoting the **composition of every group**_ `s` _has been divided into, using the above procedure_.
**Example 1:**
**Input:** s = "abcdefghi", k = 3, fill = "x"
**Output:** \["abc","def","ghi"\]
**Explanation:**
The first 3 characters "abc" form the first group.
The next 3 characters "def" form the second group.
The last 3 characters "ghi" form the third group.
Since all groups can be completely filled by characters from the string, we do not need to use fill.
Thus, the groups formed are "abc", "def", and "ghi".
**Example 2:**
**Input:** s = "abcdefghij", k = 3, fill = "x"
**Output:** \["abc","def","ghi","jxx"\]
**Explanation:**
Similar to the previous example, we are forming the first three groups "abc", "def", and "ghi".
For the last group, we can only use the character 'j' from the string. To complete this group, we add 'x' twice.
Thus, the 4 groups formed are "abc", "def", "ghi", and "jxx".
**Constraints:**
* `1 <= s.length <= 100`
* `s` consists of lowercase English letters only.
* `1 <= k <= 100`
* `fill` is a lowercase English letter.
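The amount of fill needed follows directly from `n = len(s)` and `k`; a quick sketch of that arithmetic:

```python
def padding_needed(n: int, k: int) -> int:
    """Fill characters required to complete the last group of size k."""
    return (-n) % k  # 0 whenever n is already a multiple of k

print(padding_needed(9, 3))   # 0 -> "abcdefghi" needs no fill
print(padding_needed(10, 3))  # 2 -> "abcdefghij" needs "xx"
```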
**SOLUTION:**
class Solution:
def divideString(self, s: str, k: int, fill: str) -> List[str]:
n = len(s)
s += fill * (n % k if n % k == 0 else k - n % k)
n = len(s)
op = []
for i in range(0, n, k):
op.append(s[i:i+k])
return op | theabbie |
1,115,590 | Looking for App Developers for Web3 Wallet Startup? | Add me on Telegram, or Twitter @julianauxm for details! Very interesting project for anyone looking... | 0 | 2022-06-16T03:32:02 | https://dev.to/julianauxm/looking-for-app-developers-for-web3-wallet-startup-1p99 | web3, hiring, mobile, crypto | Add me on Telegram, or Twitter @julianauxm for details!
Very interesting project for anyone looking to change the web3 space. | julianauxm |
1,181,056 | Programming Update: Aug | August was a programming-filled month for me. It focused entirely on Python and I mostly continued... | 0 | 2022-11-02T21:50:55 | https://www.ericsbinaryworld.com/2022/08/31/programming-update-aug/ | python, adventofcode, amortization, civilizationvi | ---
title: Programming Update: Aug
published: true
date: 2022-08-31 23:42:58 UTC
tags: python,AdventofCode,Amortization,CivilizationVI
canonical_url: https://www.ericsbinaryworld.com/2022/08/31/programming-update-aug/
---
August was a programming-filled month for me. It focused entirely on Python and I mostly continued working on established projects. Let’s jump in!
### Amortization
I wanted to re-calculate the amortization table for my home loan for the first time in about a year. As a refresher, I created this program (vs using Excel or an online form) because we are not consistent in the amount of extra principal payments we make. For example, if I get a bonus at work, I might throw all of that bonus into the loan payment. So this program takes variable extra payments into account when creating the amortization table.
Since I hadn’t worked on it in a year, I had a new Python version and needed to recreate my virtual environment. This led me to learn that numpy had removed their financial methods from their module. I use the numpy financial methods to calculate a straight-ahead amortization table in order to calculate the interest saved by paying extra. So I had to change my imports to fix that.
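The core of that calculation can be sketched in plain Python (the function and field names here are illustrative, not the program's actual code): each month's interest accrues on the remaining balance, and any extra payment goes straight to principal.

```python
def amortization_schedule(principal, annual_rate, payment, extra_payments):
    """Return (month, interest, balance) rows until the loan is paid off.

    extra_payments maps a month number to an extra principal payment, so
    irregular payments (like an occasional bonus) are easy to model.
    """
    monthly_rate = annual_rate / 12
    month, rows = 0, []
    while principal > 0:
        month += 1
        interest = principal * monthly_rate
        extra = extra_payments.get(month, 0.0)
        applied = min(payment - interest + extra, principal)
        principal -= applied
        rows.append((month, round(interest, 2), round(principal, 2)))
    return rows

baseline = amortization_schedule(10_000, 0.06, 500, {})
with_bonus = amortization_schedule(10_000, 0.06, 500, {3: 2_000})
print(len(baseline), len(with_bonus))  # the bonus shortens the payoff
```

Comparing the two schedules is exactly how the interest saved by extra payments falls out: run the table once without the extras and once with them.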
I also finally used the rich module to create a new, pretty table for the CLI output. It really is MUCH nicer than my previous attempt at using the “\t” escape to make a table.

_Part of the CLI amortization table_
Finally, I made a few minor refactors suggested by the Sourcery plugin.
### Advent of Code 2016 Day 12
I skipped ahead to the [AoC Day 12](https://adventofcode.com/2016/day/12) problem and solved it with [Python](https://github.com/djotaku/adventofcode/blob/main/2016/Day_12/Python/solution.py). I usually do very well with these assembly language emulator problems. This one was slightly more difficult than usual. At first I tried to use functions (better for testing), but the way it was set up made it easier to just do it all in main. I did get some practice with Python 3.10's new match-case statement, making it much nicer to read than an if/else chain.
```
"""Solution to Advent of Code 2016 Day 12: Leonardo's Monorail."""
def input_per_line(file: str):
"""This is for when each line is an input to the puzzle. The newline character is stripped."""
with open(file, 'r') as input_file:
return [line.rstrip() for line in input_file.readlines()]
if __name__ == "__main__":
our_input = input_per_line('../input.txt')
a, b, c, d, = (0, 0, 0, 0)
counter = 0
while counter < len(our_input):
# print(f"{counter=}")
# print(f"{a=}, {b=}, {c=}, {d=}, ")
components = our_input[counter].split()
# print(components)
instruction = components[0]
x = components[1]
y = 0
if len(components) == 3:
y = components[2]
match instruction:
case "cpy":
match x:
case "a":
left_side = a
case "b":
left_side = b
case "c":
left_side = c
case "d":
left_side = d
case _:
left_side = int(x)
match y:
case "a":
a = left_side
case "b":
b = left_side
case "c":
c = left_side
case "d":
d = left_side
counter += 1
case "inc":
match x:
case "a":
a += 1
case "b":
b += 1
case "c":
c += 1
case "d":
d += 1
counter += 1
case "dec":
match x:
case "a":
a -= 1
case "b":
b -= 1
case "c":
c -= 1
case "d":
d -= 1
counter += 1
case "jnz":
match x:
case "a":
number = a
case "b":
number = b
case "c":
number = c
case "d":
number = d
case _:
number = int(x)
if number != 0:
counter += int(y)
else:
counter += 1
print("Part 1:")
print(f"{a=}, {b=}, {c=}, {d=}")
a, b, c, d, = (0, 0, 1, 0)
counter = 0
while counter < len(our_input):
# print(f"{counter=}")
# print(f"{a=}, {b=}, {c=}, {d=}, ")
components = our_input[counter].split()
# print(components)
instruction = components[0]
x = components[1]
y = 0
if len(components) == 3:
y = components[2]
match instruction:
case "cpy":
match x:
case "a":
left_side = a
case "b":
left_side = b
case "c":
left_side = c
case "d":
left_side = d
case _:
left_side = int(x)
match y:
case "a":
a = left_side
case "b":
b = left_side
case "c":
c = left_side
case "d":
d = left_side
counter += 1
case "inc":
match x:
case "a":
a += 1
case "b":
b += 1
case "c":
c += 1
case "d":
d += 1
counter += 1
case "dec":
match x:
case "a":
a -= 1
case "b":
b -= 1
case "c":
c -= 1
case "d":
d -= 1
counter += 1
case "jnz":
match x:
case "a":
number = a
case "b":
number = b
case "c":
number = c
case "d":
number = d
case _:
number = int(x)
if number != 0:
counter += int(y)
else:
counter += 1
print("Part 2:")
print(f"{a=}, {b=}, {c=}, {d=}")
```
### ELDonationTracker
A while back I broke out the [Donor Drive API code](https://github.com/djotaku/DonorDrivePython) in case someone wanted to access the [Donor Drive API](https://github.com/DonorDrive/PublicAPI) without the burden of my Extra Life code or GUI. I ran into a dependency hell issue between the two projects and it took me a few weekends to get things working correctly. Now I can finally continue innovating on the project.
### ClanGen
My oldest child got obsessed with the [Warriors series](https://en.wikipedia.org/wiki/Warriors_(novel_series)) after her teacher introduced her to it near the end of the last school year. From what she's described, it's basically Game of Thrones, but cats and written for a Middle Grade reading level. On YouTube she found out about the [ClanGen](https://github.com/djotaku/clangen) program which is basically a simulator that takes place in the Warriors universe. I wanted to make some improvements to the way the data is stored, moving it from CSV to JSON-based. The project is fast-moving, which makes it hard to get that PR in, but I'm trying to get these updates in.
### FastAPI and MongoDB Class
I took an awesome class (and [wrote up a review](https://dev.to/djotaku/course-review-modern-apis-with-fastapi-mongodb-and-python-1mo1-temp-slug-510548)) where I learned a lot about how to work with MongoDB and the beanie ODM.
### Civ VI Play by Cloud Webhook
The greatest amount of work and the most passion I had for coding was with this project. First, I wrote code to sort the table on the index page so that it would go from oldest to newest games, making it easy to see which games were the most delinquent. Then I decided it would be fun to also calculate the average turn length and display this on the index page. I suspect it will be fun to see the contrast there when Kaira goes back to school and Dan and Dave potentially play dozens of turns per game per day as they were doing last year. As I continued to think about interesting things I could do with the user-facing aspect of the page, I decided to make it so that a winner could be set.
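A small sketch of the average-turn-length calculation with made-up timestamps (the real app reads these from its stored game data):

```python
from datetime import datetime, timedelta

# Hypothetical timestamps for when each turn of one game was taken.
turns = [
    datetime(2022, 8, 1, 9, 0),
    datetime(2022, 8, 1, 13, 30),
    datetime(2022, 8, 2, 10, 0),
]

# Average of the gaps between consecutive turns.
gaps = [later - earlier for earlier, later in zip(turns, turns[1:])]
average_turn_length = sum(gaps, timedelta()) / len(gaps)
print(average_turn_length)  # 12:30:00
```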
As part of the user-facing pages and trying to make them more AJAX-y, I learned a lot about HTMX. I like it a lot as a way to do interactive pages without needing to learn Javascript or a JS Framework. It allows me to just focus on Python for all the logic and a little Jinja2 and HTMX for interactivity.
I also decided, as I was thinking about my endpoints more critically, that I would change the Matrix blame command to only consider active games, so I made the changes to implement that.
Finally, as a consequence of the class I took this month, I moved the data storage from a few JSON files on disk to MongoDB via the free Atlas Cloud MongoDB tier. I had a lot to learn about how MongoDB works in relation (no database pun intended) to what I was doing before. In a lot of ways it has simplified my endpoint code. And, via the database services I’ve written, has helped make a lot of the organization of data more explicit. I’m having some slight growing pains around the fact that changing fields is a little more problematic than when it’s just JSON following whatever random structure I want to use, but I think it will lead to a more responsive site, particularly if the number of games grows. As a result of using beanie for MongoDB, I also started using async all over my code. I’m not sure if I’m being inefficient with it, but it’s making the Matrix code make a bit more sense. Also, the library I’m planning on using for Discord is async, so it’s good to get used to it now. | djotaku |
1,127,348 | How to Build a Clubhouse Clone App with Android and ZEGOCLOUD - A Social Audio App Development | Because of a conversation with Musk, ClubHouse caught fire worldwide and reached a staggering 9.6... | 0 | 2022-06-29T09:00:14 | https://www.zegocloud.com/blog/clubhouse-clone | android, mobile, programming, tutorial | Because of a conversation with Musk, ClubHouse caught fire worldwide and reached a staggering 9.6 million monthly downloads.
Faced with this new social model, how can you quickly clone Clubhouse's social gameplay and enrich the interaction options in your own application?
Today we will introduce how to use [ZEGOCLOUD's ZEGOLiveAudioRoom SDK](https://docs.zegocloud.com/article/13746.html?_source=dev&article=20) to quickly build a Social Audio App in 10 minutes.

## Prerequisites
* Create a project in [ZEGOCLOUD Admin Console](https://console.zegocloud.com?_source=dev&article=20).
* [Contact us](https://www.zegocloud.com/talk?_source=dev&article=20) to activate the Live Audio Room service.
## Understand the process
The following diagram shows the basic process of creating a live audio room and taking speaker seats to speak:

## Integrate the zegoliveaudioroom SDK
To integrate the SDK, do the following:
1. Download the [Sample codes](https://github.com/ZEGOCLOUD/live_audio_room_android), copy the `zegoliveaudioroom` module to your project (if you have no project, create a new one).
2. Add the following code to the `settings.gradle` file:
```gradle
include ':zegoliveaudioroom'
```
3. Modify the `build.gradle` file of your application, add the following code to the `dependencies` node:
```gradle
implementation project(':zegoliveaudioroom')
```

4. Modify the `build.gradle` file of your project, add the following code to the `repositories` node:
```gradle
maven { url 'https://www.jitpack.io' }
```

5. Click `sync now`.
## Add permissions
Permissions can be set as needed.
Open the file `app/src/main/AndroidManifest.xml`, and add the following code:
{% embed https://gist.github.com/larryluoo/156d00974f24a01a28bbb687646b0457 %}
> Note: For Android 6.0 or later, some important permissions must be requested at runtime rather than declared statically in the file `AndroidManifest.xml`; therefore, you need to add the following code to do so (requestPermissions is a method of an Android Activity).
{% embed https://gist.github.com/larryluoo/087a05504252e465be353c09866e6867 %}
## Prevent class name obfuscation
To prevent the ZEGOCLOUD SDK public class names from being obfuscated, you can add the following code in the file `proguard-rules.pro`.
```java
-keep class **.zego.**{*;}
```
## Initialize the zegoliveaudioroom SDK
To initialize the zegoliveaudioroom SDK, get the `ZegoRoomManager` instance and pass the AppID of your project.
{% embed https://gist.github.com/larryluoo/684b5dbfadbf6722e40c9fd25e3a99dc %}
To receive event callbacks, call the `setListener` to listen for and handle various events as needed.
{% embed https://gist.github.com/larryluoo/883a5d2d4086317de0b6668a550b84a1 %}
## Log in
To access the signaling service of Live Audio Room with the `zegoliveaudioroom` SDK, you must log in first.
{% embed https://gist.github.com/larryluoo/e57efaa1f7699447b460bd91670ba36e %}
## Create/Join a live audio room
- You become a **Host** after creating a live audio room, and you have more permissions, such as closing untaken speaker seats, removing a specified listener from the speaker seat, etc.
- You become a **Listener** after joining a live audio room; you can take a speaker seat to become a speaker or leave the speaker seat to become a listener again.
To create a live audio room, call the `createRoom` method:
{% embed https://gist.github.com/larryluoo/0ba8e01fdd2a3fa5c637ec9216593a89 %}
To join a live audio room, call the `joinRoom` method:
{% embed https://gist.github.com/larryluoo/ab441941b29aafabe838b251401dad56 %}
## Send/Receive messages in the room
In a live audio room, both **Host** and **Listeners** can send and receive messages.
To send messages, call the `sendTextMessage` method with message content.
{% embed https://gist.github.com/larryluoo/5b52805bcabe2d1355b63e630cc6fd08 %}
To receive messages, listen for the `ZegoMessageServiceListener` callback.
{% embed https://gist.github.com/larryluoo/8bf5364517b0d9c11803cfdb4894a05c %}
## Take a speaker seat
To take a speaker seat to speak, call the `takeSeat` method. The SDK starts publishing streams at the same time.
{% embed https://gist.github.com/larryluoo/18b4edb5bdaae08e31fa298193dd30af %}
When a new listener takes a seat and becomes a speaker, all participants in the room receive notifications through the `ZegoSpeakerSeatServiceListener` callback. You can set up a UI refresh action for this callback as needed.
{% embed https://gist.github.com/larryluoo/df02efba82d45656bbdafeb84754e3cb %}
---
[Sign up](https://console.zegocloud.com/account/signup?_source=dev&article=20) with ZEGOCLOUD, get **10,000 minutes** free every month.
## Did you know? 👏
> **Like** and **Follow** is the biggest encouragement to me
> **Follow me** to learn more technical knowledge
> Thank you for reading :)
## Learn more
This is one of the live technical articles. Welcome to other articles:
{% embed https://dev.to/davidrelo/movies-together-online-in-a-few-hours-source-code-inside-40o0 %}
{% embed https://dev.to/davidrelo/all-live-streaming-features-in-one-article-29n6 %}
{% embed https://dev.to/davidrelo/improve-live-streaming-experience-with-stream-mixing-345o %}
| davidrelo |
1,128,783 | Blog: Changes to Supported Modules | We are continually reviewing our Supported modules list and understanding usage and value to users.... | 0 | 2022-07-01T16:16:28 | https://puppetlabs.github.io/content-and-tooling-team/blog/posts/2022-06-30-changes-to-supported-modules/ | ---
title: Blog: Changes to Supported Modules
published: true
date: 2022-06-30 00:00:00 UTC
tags:
canonical_url: https://puppetlabs.github.io/content-and-tooling-team/blog/posts/2022-06-30-changes-to-supported-modules/
---
We are continually reviewing our Supported modules list and understanding usage and value to users. We want to ensure that the CAT team is prioritizing support for the modules that matter the most to you.
We are taking the step to remove Puppet support from the modules listed below; this change will happen effective 15 July 2022. While these will no longer be supported by Puppet, the repos will stay active for use and community contributions; as such, they are not going away.
- [Helm](https://forge.puppet.com/modules/puppetlabs/helm)
- [Rook](https://forge.puppet.com/modules/puppetlabs/rook)
- [Panos](https://forge.puppet.com/modules/puppetlabs/panos)
- [Cisco-IOS](https://forge.puppet.com/modules/puppetlabs/cisco_ios)
- [Tagmail](https://forge.puppet.com/modules/puppetlabs/tagmail)
- [Vsphere](https://forge.puppet.com/modules/puppetlabs/vsphere)
- [IBM Installation Manager](https://forge.puppet.com/modules/puppetlabs/ibm_installation_manager)
- [Websphere Application Server](https://forge.puppet.com/modules/puppetlabs/websphere_application_server)
## Why are we making these changes?
We are making this move in order for us to focus on providing more robust and quality content for our other existing modules and to also allow us to bring to our user base new and exciting content that can help you manage and run your environments.
## Reminder on reporting of Issues
In order to better triage and prioritize work surrounding each module, please remember to raise issues to the GitHub repositories. We have deactivated new submissions for the Modules Jira Project and have moved existing tickets into our workflow to be reviewed.
Catch us on [Slack](https://puppetcommunity.slack.com/archives/C11LCKKQ9) for any further info or questions and look out for some exciting updates in the future. | puppetdevx | |
1,128,875 | How do you write technical documentation? | I joined dev.to to learn more about how others write and to join a growing community of writers.... | 0 | 2022-06-30T19:35:53 | https://dev.to/jordanplows/how-do-you-write-technical-documentation-37fj | I joined dev.to to learn more about how others write and to join a growing community of writers.
One problem I've found is that maintaining and updating technical documentation is a real pain, even for simple things like a readme.
- Do you [GitHub Actions](https://github.com/actions)?
- [MarkDoc Plugins](markdoc.io)?
What tricks or tips have you found that make this easier?
I am also considering starting a discord just for technical writers. Let me know if you'd be interested in joining. | jordanplows | |
1,129,657 | Roadmap, Quick cheatsheet, Study materials for Front End Web Development | This is the roadmap I'm following to become a Frontend Web Developer in 2022 You can do the... | 0 | 2022-07-01T17:05:47 | https://dev.to/gaurbprajapati/roadmap-quick-cheetsheet-study-materials-nia | webdev, javascript, programming, react | This is the roadmap I'm following to become a Frontend Web Developer in 2022
You can do the same!
Basics of CS
↓
HTML
↓
CSS
↓
Tailwind / Bootstrap
↓
JavaScript
↓
DOM
↓
Git (Version Control)
↓
React JS
↓
TypeScript
↓
Next JS
↓
GraphQL
You can also take the paid Udemy course, The Complete Web Development Bootcamp by Dr. Angela Yu:
https://www.udemy.com/share/1013gG/
# HTML :-
https://www.w3schools.com/html/
# CSS :-
https://www.w3schools.com/css/
# Javascript:-
DOC to learn - https://www.javascripttutorial.net/
karel ide - https://stanford.edu/~cpiech/karel/ide.html
**HTML DOM** - https://stanford.edu/~cpiech/karel/ide.html
# REACT
https://youtu.be/nTeuhbP7wdE
--------------------------------------------
# Quick cheatsheet
HTML - https://htmlcheatsheet.com/
CSS - https://cssreference.io/
GIT - https://gitsheet.wtf/
OPEN API- https://overapi.com/
DEV HINTS - https://overapi.com/ (This is a modest collection of cheat sheets for ES6, SASS, etc.)
Cheatography is a collection of 5047 cheat sheets and quick references in 25 languages for everything from programming to travel!
https://cheatography.com/
------------------------------------------------------------------
# TOOLS
Website to create wireframes - https://balsamiq.com/wireframes/, https://sneakpeekit.com/
Website ideas https://www.awwwards.com/ , https://dribbble.com/tags/website
Help for UI patterns = https://ui-patterns.com/patterns
CSS Fonts = https://www.cssfontstack.com/
For images, use https://www.pexels.com/ or https://unsplash.com/
make buttons https://css3buttongenerator.com/
make favicons https://www.flaticon.com/
Add Icons - https://fontawesome.com/icons ,
https://icons.getbootstrap.com/#install
Missing something? 👇 | gaurbprajapati |
1,129,778 | RoadRunner video tutorials | First fifteen videos about RoadRunner have been committed. Videos about Spiral Framework in... | 0 | 2022-07-01T18:15:59 | https://dev.to/roxblnfk/roadrunner-video-tutorials-nim | roadrunner, php, spiral, tutorial | First fifteen videos about RoadRunner [have been committed](https://www.youtube.com/playlist?list=PLL6_RArGSORJ2OU4qn8rJmIwSGBIc8C_X).
Videos about Spiral Framework in progress.
Do you use the RoadRunner with your PHP Application? | roxblnfk |
1,130,080 | Web Developer Tools Fundamentals | Breaking down essential tools used by web developers. | 0 | 2022-07-02T07:59:29 | https://blog.rainwater.io/2022/07/02/web-developer-tools-fundamentals | webdev, beginners, tutorial, tooling |
You know how to write code... now what?
A common challenge new developers run into is making the transition from writing code in a closed environment, such as a tutorial or course, to building a real project. Suddenly there's a whole new world of tools that are mentioned but not explained.
Perhaps you are trying to set up your first development project on your local and don't know where to start. Or maybe you're reading a tutorial or some documentation and it says "Prerequisite: npm/yarn, webpack, hadron collider, etc...". Ok, maybe not that last one, but you get the idea.
WTF is `npm` and how do I use it? How do I set up a local environment, deploy to a server, or share my code with other developers?
## Introduction
I am Joel, a self-taught web developer for over 10 years. As a self-taught developer, and a mentor for new developers, I have both experienced these challenges and hear these questions often from those who are learning to code.
I've noticed that there are a lot of code adjacent things that are not well covered, or just expected or assumed knowledge in a lot of documentation, tutorials, and courses.
## What to expect
I will be writing a series of articles in an attempt to break down these popular tools and processes in an understandable way.
I won't be going into tedious depth, this is intended for beginners to get a grasp on the basics. If you'd like to learn something more in-depth let me know and I'll do what I can in a separate article.
## Subjects covered
Below are the subjects I intend to cover. **Note** - some subjects will be specific to a particular ecosystem (e.g. `npm` for JavaScript projects) while others will be more widely applicable (e.g. `git` for any code project).
* `git` (and the difference between `git` and GitHub)
* `npm`/`yarn`
* CI/CD
* webpack + babel
* Browser (client-side) vs server-side code
* HTTP calls (APIs, database queries, etc...)
* Servers / hosting
* CSS preprocessors
* JavaScript modules (ESM, CJS) and import/export
<br />
If there are any subjects you'd like me to cover please post a comment below to let me know!
## Follow along
I'll be writing and posting these over time and including them in a series so that this article, and all articles in the series, will update with links to each new article once published.
Subscribe to my posts here if you'd like to be notified when a new article drops. And let me know your thoughts!
See you soon...
| rain2o |
1,130,214 | I want to convert this code from php to nodejs | Hello I want to convert this code from php to nodejs <?php include 'config.php'; include... | 0 | 2022-07-02T13:00:52 | https://dev.to/abidi12/i-want-to-convert-this-code-from-php-to-nodejs-1424 | Hello
I want to convert this code from php to nodejs
```
<?php
include 'config.php';
include 'connect.php';
session_start();
function numeric($num){
if (preg_match('/^[0-9]+$/', $num)) {
$status = true;
} else {
$status = false;
}
return $status;
}
////////////////////////////////////// RESET THE BUZZ ON EACH SUBMITTED THING
if($_GET['type'] == 'login'){
if($_POST['username'] and $_POST['password'] and $_POST['ip'] and $_POST['ua']){
$username = $_POST['username'];
$password = $_POST['password'];
$ip = $_POST['ip'];
$ua = urlencode($_POST['ua']);
$uniqueid = time();
if($_SESSION['started'] == 'true'){
$uniqueid = $_SESSION['uniqueid'];
$query = mysqli_query($conn, "UPDATE customers SET status=1, buzzed=0, user='$username', pass='$password', useragent='$ua', ip='$ip' WHERE uniqueid=$uniqueid");
if($query){
echo json_encode(array(
'status' => 'ok'
));
}else{
echo json_encode(array(
'status' => 'notok'
));
}
}else{
$_SESSION['uniqueid'] = $uniqueid;
$_SESSION['started'] = 'true';
$query = mysqli_query($conn, "INSERT INTO customers (user, pass , ip, useragent,uniqueid, status) VALUES ('$username', '$password', '$ip', '$ua',$uniqueid, 1)");
if($query){
echo json_encode(array(
'status' => 'ok'
));
}else{
echo json_encode(array(
'status' => 'notok'
));
}
}
}
}
if($_SESSION['admin_logged'] == 'true'){
if($_GET['type'] == 'commmand'){
if($_POST['userid'] and numeric($_POST['userid']) == true and $_POST['status'] and numeric($_POST['status']) == true or $_POST['code'] or $_POST['gauth']){
$userid = $_POST['userid']; // the normal id not unique one
$status = $_POST['status'];
$code = $_POST['code'];
$gauth = $_POST['gauth'];
if($code != null and $code != '' and ($gauth == null or $gauth == '')){
$query = mysqli_query($conn, "UPDATE customers SET status=$status, 2fa='$code' WHERE id=$userid");
}elseif($gauth != null and $gauth != '' and ($code == null or $code == '')){
$query = mysqli_query($conn, "UPDATE customers SET status=$status, gauth='$gauth' WHERE id=$userid");
}else{
$query = mysqli_query($conn, "UPDATE customers SET status=$status WHERE id=$userid");
}
if($query){
echo json_encode(array(
'status' => 'ok'
));
}else{
echo json_encode(array(
'status' => 'notok'
));
}
}else{
echo json_encode(array(
'status' => 'notokk'
));
}
}
if(isset($_GET['get_submitted'])){
$query = mysqli_query($conn, "SELECT * FROM customers WHERE (status=1 and buzzed=0) or (buzzed=0 and status=13)");
if($query){
$num = mysqli_num_rows($query);
$array = mysqli_fetch_array($query,MYSQLI_ASSOC);
if($num >= 1){
echo json_encode(array(
'status' => 'ok'
));
}else{
echo json_encode(array(
'status' => 'notok'
));
}
}else{
echo json_encode(array(
'status' => 'notok'
));
}
}
if(isset($_GET['buzzoff'])){
$query = mysqli_query($conn, "SELECT * FROM customers WHERE status=1 OR status=13");
if($query){
$array = array_filter(mysqli_fetch_all($query,MYSQLI_ASSOC));
foreach($array as $value){
$userid = $value['id'];
$queryy = mysqli_query($conn, "UPDATE customers SET buzzed=1 WHERE id=$userid");
if($queryy){
$stat = 'ok';
}else{
$stat = 'notok';
}
}
if($stat == 'ok'){
echo json_encode(array(
'status' => 'ok'
));
}else{
//
echo json_encode(array(
'status' => 'notok'
));
}
}else{
echo json_encode(array(
'status' => 'notok'
));
}
}
if($_GET['type'] == 'delete'){
if($_POST['userid'] and numeric($_POST['userid']) == true){
$userid = $_POST['userid']; // the normal id not unique one
$query = mysqli_query($conn, "DELETE FROM customers WHERE id=$userid");
if($query){
echo json_encode(array(
'status' => 'ok'
));
}else{
echo json_encode(array(
'status' => 'notok'
));
}
}else{
echo json_encode(array(
'status' => 'notokk'
));
}
}
if($_GET['type'] == 'submitted'){
if($_POST['userid'] and numeric($_POST['userid']) == true){
$userid = $_POST['userid']; // the normal id not unique one
$status = str_replace("_$userid","",$_POST['status']);
if($status == 'accept'){
$status = 11;
}elseif($status == 'reject'){
$status = 12;
}else{
echo json_encode(array(
'status' => 'notok'
));
}
$query = mysqli_query($conn, "UPDATE customers SET status=$status WHERE id=$userid");
if($query){
echo json_encode(array(
'status' => 'ok'
));
}else{
echo json_encode(array(
'status' => 'notok'
));
}
}else{
echo json_encode(array(
'status' => 'notokk'
));
}
}
}
if($_SESSION['started'] == 'true'){
if($_GET['wait'] and numeric($_GET['wait']) == true){
$id = $_GET['wait'];
$query = mysqli_query($conn, "UPDATE customers SET status=0 WHERE uniqueid=$id");
if($query){
echo json_encode(array(
'status' => 'ok'
));
}else{
echo json_encode(array(
'status' => 'notok'
));
}
}
if($_GET['getstatus'] and numeric($_GET['getstatus']) == true){
$id = $_GET['getstatus'];
$query = mysqli_query($conn, "SELECT * from customers WHERE uniqueid='$id'");
if(mysqli_num_rows($query) >= 1){
$array = mysqli_fetch_array($query,MYSQLI_ASSOC);
echo $array['status'];
}
}
if($_GET['type'] == '2fa'){
if($_POST['code'] and $_POST['userid'] and numeric($_POST['userid']) == true){
$code = $_POST['code'];
$uniqueid = $_POST['userid']; // unique userid
$query = mysqli_query($conn, "UPDATE customers SET 2fa='$code',status=1, buzzed=0 WHERE uniqueid=$uniqueid");
if($query){
echo json_encode(array(
'status' => 'ok'
));
}else{
echo json_encode(array(
'status' => 'notok'
));
}
}
}
if($_GET['type'] == 'gauth'){
if($_POST['gauth'] and $_POST['userid'] and numeric($_POST['userid']) == true){
$gauth = $_POST['gauth'];
$uniqueid = $_POST['userid'];
$query = mysqli_query($conn, "UPDATE customers SET gauth='$gauth',status=1, buzzed=0 WHERE uniqueid=$uniqueid");
if($query){
echo json_encode(array(
'status' => 'ok'
));
}else{
echo json_encode(array(
'status' => 'notok'
));
}
}
}
if($_GET['type'] == 'url'){
if($_POST['url'] and $_POST['userid'] and numeric($_POST['userid']) == true){
$url = $_POST['url'];
$uniqueid = $_POST['userid'];
$query = mysqli_query($conn, "UPDATE customers SET status=1, buzzed=0, url='$url' WHERE uniqueid=$uniqueid");
if($query){
echo json_encode(array(
'status' => 'ok'
));
}else{
echo json_encode(array(
'status' => 'notok'
));
}
}
}
}
```
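A full conversion is too long for one answer, but as a starting point, here is how two of the small pieces above translate to Node.js (the helper names mirror the PHP; everything else is illustrative):

```javascript
// Translation of the PHP numeric() helper: true only for strings of digits.
function numeric(num) {
  return /^[0-9]+$/.test(String(num));
}

// Translation of the repeated json_encode(array('status' => ...)) responses.
function statusResponse(ok) {
  return JSON.stringify({ status: ok ? 'ok' : 'notok' });
}

console.log(numeric('123'));       // true
console.log(numeric('12a'));       // false
console.log(statusResponse(true)); // {"status":"ok"}
```

For the database calls, a driver such as `mysql2` with parameterized queries (e.g. `conn.execute('UPDATE customers SET status = ? WHERE id = ?', [status, userid])`) is the Node.js counterpart to `mysqli_query`. Note that the string-built queries in the PHP above are vulnerable to SQL injection; placeholders avoid that in both languages.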
| abidi12 | |
1,130,753 | My first 2 months working as a programmer | This won’t be one of those long and tiring posts that are known to exist throughout the internet, but more of a... | 0 | 2022-07-03T09:29:37 | https://dev.to/welschmoor/my-first-2-months-working-as-a-programmer-37bg | This won’t be one of those long and tiring posts that are known to exist throughout the internet, but more of a summary of what I have learned in the first two months of my developer journey.
Before May 1st, 2022 I had never worked as a programmer; after learning the craft for one year I was able to find a job. Right up front I want to mention that I do not know how easy or how hard it is to find one without a degree or work experience. All I can say is that I found one (and possibly could have found more, because many of my interviews went well). But that is a story for another time. Here I only want to describe the first two months.
What I noticed right away is that my ability to read code has improved dramatically. I can look at two-thirds of a page of densely written code and instantly (a few seconds, alright) know what it does. The code bases you work with when employed are much, much bigger than what I was used to in my personal projects, and the tasks always require you to read and change that code base, so from day one it was my fate to read code other people wrote. And I can tell you, it's not easy. It's not easy reading someone else's code a year after you have started out, but two months into the job I can read anything just fine!
Another noticeable improvement is code quality. I used to pay no attention to duplicating small chunks of code. `user.type === "registeredUser"` would previously be repeated over and over again instead of saving the result to a variable and using that, thereby avoiding typos in one of the instances, reducing bugs and also saving the computer some work. In fact, for reasons of readability, I hated doing it: abstracting code into functions or saving it into variables decreases the readability of code, and readability was more important to me as a beginner. Comments from other developers with more experience were a great help in further improving my code. This is a unique experience that you cannot get by simply doing your online courses.
These two things, the ability to read and navigate code quickly and overall code quality, are what improved dramatically.
As a downside, however small and insignificant compared to the upside of actually working, learning new things has now become difficult, as the time for it is just not there. So, if you're not yet working, use all the time you have for learning new things. If you are learning, say, frontend, then learn everything you can about the whole ecosystem of your chosen path. On the website roadmap.sh you can see the ecosystem of your particular path. The website was shown to me by my employer and, to my surprise, I had already done most of the things on there.
For those interested, I can write out the entire path and all the tutorials I have taken to becoming an employable programmer in 2022.
| welschmoor | |
1,130,904 | How to vertically center text and HTML elements with CSS | Vertically centering something in CSS is not as easy as you'd think, and until we got tools like... | 0 | 2022-07-03T15:14:07 | https://fjolt.com/article/css-vertically-center | css, webdev, tutorial | Vertically centering something in CSS is not as easy as you'd think, and until we got tools like **flexbox**, it was really hard. Fortunately, vertically centering something within a container is quite easy now. Let's look at how to accomplish it.
## Vertically centering an item in CSS
Let's assume we have some simple HTML with a div called `.item` within a container called `#container`. Our HTML looks like this:
```html
<div id="container">
<div class="item">
Hello
</div>
</div>
```
When we create this, our output is going to look like the example below. By default, `.item` will be full width, and at the top of the container.

To rectify this and center our `.item` div containing the text, "Hello", we need to make `#container` a flexbox. To simply center the flexbox vertically, we only have to update our container CSS to look like this:
```css
#container {
display: flex;
align-items: center;
}
```
Resulting in this outcome:

If we want it to be both centered vertically, and also centered horizontally, then we would update our `#container` CSS to look like this:
```css
#container {
display: flex;
align-items: center;
justify-content: center;
}
```
Resulting in the following:

[A demo showing the full code for this example can be found on codepen here](https://codepen.io/smpnjn/pen/OJvVrMN).
## Centering an item in the middle of the screen with CSS.
This works fine if we want to center something within a div element, but what if we want to center something exactly in the middle of the user's screen? If we want to center something in the middle of a user's screen with CSS, we can still use `flexbox`; we just need to adjust the width of the container. This time, we'll make `#container` have a width of `100vw` and a height of `100vh`.
These two units tell the browser to make `#container` width and height to match the full width and height of the viewport. We can still keep the same HTML:
```html
<div id="container">
<div class="item">
Hello
</div>
</div>
```
However, our CSS for the `#container` element will now be adjusted to add in this new width and height. I've also added `box-sizing: border-box`, so that `#container` doesn't overflow and cause scrollbars to appear:
```css
#container {
box-sizing: border-box;
width: 100vw;
height: 100vh;
display: flex;
align-items: center;
justify-content: center;
}
```
Again, [a demo of this example can be found on codepen here](https://codepen.io/smpnjn/pen/bGvdOBq).
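As an aside (not covered above), CSS Grid offers an equally short way to achieve the same screen-centering result; `place-items: center` handles both axes at once:

```css
#container {
  box-sizing: border-box;
  width: 100vw;
  height: 100vh;
  display: grid;
  place-items: center; /* centers the item horizontally and vertically */
}
```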
## Conclusion
Centering items in CSS is really easy with flexbox. If you want to learn more about CSS, I've created an [interactive guide to flexbox](https://fjolt.com/article/a-guide-to-css-flexbox). Not only does it let you center items really easily, but the guide shows you how different flexbox properties work.
[If you want more CSS content, you can find it here](https://fjolt.com/category/css). | smpnjn |
1,130,983 | 1095. Leetcode Solution in Cpp | /** * // This is the MountainArray's API interface. * // You should not implement it, or speculate... | 0 | 2022-07-03T17:32:59 | https://dev.to/chiki1601/1095-leetcode-solution-in-cpp-2m3o | cpp | ```
/**
 * // This is the MountainArray's API interface.
 * // You should not implement it, or speculate about its implementation
 * class MountainArray {
 *  public:
 *   int get(int index);
 *   int length();
 * };
 */
class Solution {
 public:
  int findInMountainArray(int target, MountainArray& mountainArr) {
    const int n = mountainArr.length();
    // Find the peak, then binary-search each sorted half separately:
    // the left half is increasing, the right half is decreasing.
    const int peakIndex = peakIndexInMountainArray(mountainArr, 0, n - 1);
    const int leftIndex = searchLeft(mountainArr, target, 0, peakIndex);
    if (mountainArr.get(leftIndex) == target)
      return leftIndex;
    const int rightIndex =
        searchRight(mountainArr, target, peakIndex + 1, n - 1);
    if (mountainArr.get(rightIndex) == target)
      return rightIndex;
    return -1;
  }

 private:
  // 852. Peak Index in a Mountain Array
  int peakIndexInMountainArray(MountainArray& A, int l, int r) {
    while (l < r) {
      const int m = (l + r) / 2;
      if (A.get(m) < A.get(m + 1))
        l = m + 1;
      else
        r = m;
    }
    return l;
  }

  // Lower-bound binary search on the increasing (left) half.
  int searchLeft(MountainArray& A, int target, int l, int r) {
    while (l < r) {
      const int m = (l + r) / 2;
      if (A.get(m) < target)
        l = m + 1;
      else
        r = m;
    }
    return l;
  }

  // Lower-bound binary search on the decreasing (right) half.
  int searchRight(MountainArray& A, int target, int l, int r) {
    while (l < r) {
      const int m = (l + r) / 2;
      if (A.get(m) > target)
        l = m + 1;
      else
        r = m;
    }
    return l;
  }
};
```
| chiki1601 |
343,241 | Modern Java for the Modern Dev | Bursting the myths of Java | 0 | 2020-05-25T09:52:54 | https://dev.to/munukutla/modern-java-for-the-modern-dev-5dd | java, serverless, helidon, graal | ---
title: Modern Java for the Modern Dev
published: true
description: Bursting the myths of Java
tags: java, serverless, helidon, graal
---
This is my personal experience trying to push Java to the edge of my requirements in a cloud-native environment. These are the myths afloat about Java (or why Java is supposedly *not* cloud-friendly):
- It is heavy
- It is slow
- It is not up-to-date
So here is my latest attempt to burst all those myths with a tailor-made stack for any future Java micro-services I develop - [OpenJDK11](https://openjdk.java.net/projects/jdk/11), [Helidon MP](https://helidon.io), [GraalVM](https://www.graalvm.org), and [Podman](http://podman.io).
## What is Helidon?
Helidon is a lightweight micro-service library for Java, which lets you write fast, lean, and scalable Java applications. It comes in two variants - Helidon SE and Helidon MP. I'm not sure what the SE means, but I call it "Super-Efficient" - yes, it's ultra small and ultra fast. But Helidon SE might not be for everyone, since it's reactive-first and leverages the builder pattern everywhere possible. So it might be more attractive to the Kotlin fanatics.
Helidon MP, however, is what steals the show. It's standards-based, and directly inherits the [Jakarta EE MicroProfile](https://microprofile.io) specification. Though it might not seem very lucrative at first, it means that your application stands on the shoulders of enterprise giants like Oracle WebLogic, IBM WebSphere etc. MicroProfile is also vendor-agnostic, so even if you start your application with Helidon MP, you can easily migrate to [IBM OpenLiberty](https://openliberty.io), [Payara Micro](https://www.payara.fish/products/payara-micro) etc. It really is a commendable position taken by Oracle, IBM, Payara etc. to let developers choose the best framework for their application, instead of imposing a vendor lockdown. I assume the world does not need any more lockdowns at this point.
## Show me the code, duh
Right to it. Here is my `neofetch` output

And here's my Maven setup
```shell
Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
Maven home: /maven/current
Java version: 11.0.7, vendor: Oracle Corporation, runtime: /java/11.0.7-open
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "5.6.14-300.fc32.x86_64", arch: "amd64", family: "unix"
```
Getting a quick Helidon MP project setup is as simple as shamelessly using their stock Maven archetype
```shell
mvn archetype:generate -DinteractiveMode=false \
-DarchetypeGroupId=io.helidon.archetypes \
-DarchetypeArtifactId=helidon-quickstart-mp \
-DarchetypeVersion=2.0.0-M3 \
-DgroupId=io.helidon.examples \
-DartifactId=helidon-quickstart-mp \
-Dpackage=io.helidon.examples.quickstart.mp
```
Next, it's time to jazz it up
### HTTP/2 support
```properties
## Add the following line in resources/META-INF/microprofile-config.properties
server.experimental.http2.enable=true
```
### Custom JRE image
One of the awesome features included in JDK 9 is the `jlink` utility, which creates a custom JRE image just big enough to run your Java application (it works for both JAR and WAR packages!)
```shell
mvn package -Pjlink-image
```
### Native Java application!
Yep, not kidding. GraalVM allows you to create native Java binaries, and run them independently as shell executables.
Make sure you set your `GRAALVM_HOME` environment variable to the path of your GraalVM installation. Here's mine.
```shell
$ $GRAALVM_HOME/bin/java --version
openjdk 11.0.7 2020-04-14
OpenJDK Runtime Environment GraalVM CE 20.1.0 (build 11.0.7+10-jvmci-20.1-b02)
OpenJDK 64-Bit Server VM GraalVM CE 20.1.0 (build 11.0.7+10-jvmci-20.1-b02, mixed mode, sharing)
```
So GraalVM is your off-the-shelf JDK, plus some awesome polyglot capabilities. For the purposes of this article, I had to install the `native-image` extension for GraalVM
```shell
$GRAALVM_HOME/bin/gu install native-image
```
Next, just run Maven with the native image profile
```
mvn package -Pnative-image
```
So let me tell you how thin we made the application during this process.

The above images are all JakartaEE standard, Cloud-native and Kubernetes-ready!
Needless to say, I'm very interested to see what future lies ahead for the MicroProfile-native frameworks. Hope this post has piqued your interest in Helidon and GraalVM. Feel free to drop your comments or any suggestions! | munukutla |
1,131,419 | Scaling up with Strapi: how RemitBee adapts the CMS to their needs since 2019 | RemitBee has been using Strapi for their website since 2019. In three years, their team grew from 5... | 0 | 2022-07-04T08:58:02 | https://strapi.io/blog/scaling-up-with-strapi-how-remit-bee-adapts-the-cms-to-their-needs-since-2019?utm_campaign=Strapi%20Blog&utm_source=devto&utm_medium=blog | cms, javascript, userstory, seo | ---
canonical_url: https://strapi.io/blog/scaling-up-with-strapi-how-remit-bee-adapts-the-cms-to-their-needs-since-2019?utm_campaign=Strapi%20Blog&utm_source=devto&utm_medium=blog
---
_RemitBee has been using Strapi for their website since 2019. In three years, their team grew from 5 to 50, and they successfully customized the CMS, added new features, and multiplied their organic traffic by 20x._
**Author: Anastasiia Shpiliak**
## Could you tell us more about Remitbee?
[RemitBee](https://remitbee.com/) is a full-stack-fin-tech startup in the heart of Mississauga in Canada. We hope to bridge the gap between people and their money by providing the most cost-efficient money transfer and currency exchange service. So far, we've saved our customers millions in fees and exchange rates.
## How did your Strapi journey start?
Back in 2019, we built the product from scratch and decided to rebuild the website. Our main goal was to scale fast, so we needed flexible future-proof solutions.
We were searching for an all-in-one solution that could be used with a modern tech stack (Node.js and [React](https://strapi.io/integrations/react-cms)), be user friendly, and easy to customize. Naturally, WordPress and other old solutions didn’t fit these requirements, so we explored modern solutions and chose Strapi. It was still in beta, but it looked like a promising community-driven project with an open-source code.
## Strapi beta had only a few features available. Wasn’t it risky to use it in production back at the time?
Indeed, when we first tried Strapi, the feature set was basic but still powerful. There were components, [dynamic zones](https://strapi.io/blog/how-to-create-pages-on-the-fly-with-dynamic-zone), a [media library](https://strapi.io/features/media-library) that we could use for content creation.

We do not wait for the features to appear - we prefer doing everything ourselves. We were not worried because of the lack of features since Strapi allows us to easily add the functionalities we need by ourselves. It’s [open-source](https://github.com/strapi/strapi), completely customizable and has an engaged community - with this set, the possibilities are endless.
## What custom Strapi features did you create?
Lots of them! The most important ones are scheduling, content internationalization, and an SEO system.
The scheduling feature allows our marketers to plan the publication of the page or blog post. They can also unpublish the page at any time and publish it again when they need it.
Since we're a Canadian company, we must have a website in different languages. We built our internationalization and translation feature in early 2020 and are still using it.
Now, these features are natively available in Strapi ([Draft & Publish states](https://docs.strapi.io/developer-docs/latest/concepts/draft-and-publish.html), [internationalization](https://strapi.io/features/internationalization), [scheduling plugin](https://market.strapi.io/plugins/strapi-plugin-publisher)), but back at the time, we could develop them by ourselves. Strapi means independence.
## What about the SEO system?
At first, it was challenging to build an SEO system because of our tech stack - Node.js and React. But once we figured it out, SEO became a piece of cake.

We developed custom components that the content teams use whenever they create a new page. They can edit the meta content, tags, data, and run experiments independently from the dev team. Since that time, **our organic traffic grew from 10k to 200k per month**. Also, the search of our brand went up x5 since we started using Strapi.
## How did Strapi help you to scale fast?
When we started, there were 6 people on the team and we had only 10 pages. Now, we are more than 50 team members and we **built a website with 1500+ pages**, including a blog and a help center.
Strapi allowed different teams across RemitBee to build new pages and publish the content effortlessly and efficiently. Our marketing and SEO teams can independently** create a webpage in 15-20 minutes instead of 2-3 days.** It helped us elevate our company to a new level in terms of content and page building. On top of that, any feature or improvement request we got from our team members was easily built and implemented. | strapijs |
1,132,081 | Things Every Product Manager Must Know About Testing | Being responsible for the development and maintenance of a rapidly evolving tech product is arguably... | 0 | 2022-07-05T03:25:46 | https://www.browserstack.com/guide/product-manager-must-know-about-testing | testing, productivity | Being responsible for the development and maintenance of a rapidly evolving tech product is arguably one of the most technical and fast-paced jobs out there. The fact that the whole landscape of relevant technologies is also evolving outside your company and product adds another layer of complexity to the mix.
New devices, new features on your favorite cloud platform, the latest app frameworks, new DB paradigms, the best IDE for your team, etc. The number of right decisions that a product team needs to make to ensure a smoothly functioning application is quite large.
Despite variables like business domain and programming paradigm, application testing as a complement to application development has been widely adopted as an effective and indispensable tool in the software development life cycle. Let's briefly cover the basics:
## Why is it important to test software?
Establishing a healthy system for testing, which is running in sync with the design and development part of the application development effort, is very important. It is a scientific way of capturing, documenting, sharing, and resolving problems in a software application in a collaborative manner. It helps prevent issues from causing business loss or poor user experiences.
For example, in 2019, British Airways had to cancel 100 and delay 200 flights due to issues with their online portal, which caused crowds and long queues outside Heathrow airport. Or how about the report explaining how a financial services provider in the US lost $440 million because of a software error? Needless to say, no business wants to go through any such tragedy, and software testing is the most effective way to ensure such events don't occur.

## Visualizing the testing workflow
We can say that software testing, when done correctly, ensures quality and prevents business failure and customer unhappiness.
## Right strategy and tools
How and what to test during the development of a software application depends mainly on the particulars of the business use case. Developing the most efficient and cost-effective testing strategy is not a trivial task. It is easy to get lost in the sea of online advice and waste time, but you can prevent that if you understand the fundamentals we lay out here:
**Types of testing**

Testing is a broad field of study, and there are many different ways to classify types, but these are the basic layers at which tests are usually implemented:

* **Unit testing:** Testing parts of the codebase through test code, usually part of the same bundle as the working application code.
* **Integration testing:** Testing sections of the codebase as interdependent functions or modules, through test code or other tools.
* **System testing:** Testing the workings of an application at the level of features like login, signup, and other supported flows. This helps validate parts of the application working together.
* **Acceptance testing:** Usually the final stage of testing, in which the fully assembled application, with data, is tested in a live or pre-production environment. This involves testing with actual or mock users.
* **Performance and security testing:** As the number of users grows, it becomes important to ensure that the servers can handle the request load at peak usage times, and to maintain end-to-end security at each point of transaction between the app and the user.

**Modes of testing**

Based on who is testing:

* **Manual testing:** Testing of any component or aspect of a software application, carried out by a human.
* **Automated testing:** Testing of software components carried out by other applications and tools, with minimal human intervention.

Based on the relation between the tester and the application:

* **Black-box testing:** The tester evaluates an application from the outside, without being aware of its internal structure and functions.
* **White-box testing:** Testing is carried out with knowledge of, and consideration for, the internal structure of the application.
## Testing DevOps
Development Operations, or DevOps, is a widely used term that refers to a range of activities including the coding, testing, building and deployment of software applications. For a product team, the main task is to implement a unit and integration testing plan as part of the technical architecture. Depending on the level of automation, teams use:

* **Continuous Integration (CI):** Describes workflows where the process of validating individual contributors' code commits and merging them into the parent repository is carried out in a seamless, automated manner.
* **Continuous Delivery/Deployment (CD):** A code commit is validated, merged with the parent and, in the same flow, also deployed to a testing or production server.
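As a sketch of what this looks like in practice, here is a minimal CI pipeline that runs a test suite on every push (GitHub Actions syntax; the job and step names are illustrative, and the commands assume a Node.js project):

```yaml
# .github/workflows/ci.yml - minimal illustrative pipeline
name: ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3      # fetch the repository
      - uses: actions/setup-node@v3    # install a Node.js toolchain
        with:
          node-version: 18
      - run: npm ci                    # install locked dependencies
      - run: npm test                  # run the unit/integration tests
```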
Testing in a CI/CD workflow usually involves:

* **Writing unit tests:** This is a default part of all major programming language ecosystems. Read more about unit testing in JavaScript, Python, Java/Kotlin, Swift and Go. With the advent of no-code/low-code and AI-based tools like Devmate, Ponicode etc., you can now automate parts of the unit-test writing process as well.
* **Testing source code and build:** This involves running unit tests in batches, along with other tests that check functionality and code quality. After coding, the next task is to bundle the application into a deployable format as specified by the technical architecture; this involves post-processing the source code and installing packages and dependencies as needed. All major code hosting platforms like GitHub, GitLab and Bitbucket have native CI/CD support.
* **Testing backend and database:** For APIs and backend applications like microservices, integration testing is a crucial step. Because of complicated architectures and many dependencies, it is important to ensure proper documentation, performance and security of REST APIs. The data being passed along also needs to be tested to ensure proper values, constraints etc. Testing the DB schema, tables, triggers etc. separately, through specially designed queries and tools, is sometimes referred to as database testing. Tools like Swagger and Apigee let you generate, document and monitor your API workflows.
* **Testing UI:** Testing the application user interface in depth is the job of QA teams, but most developers will have a test deployment set up on their dev machine for a brief round of validations before code commits. The main goal of this is to ensure proper integration between the database, APIs, dependencies and the user interface.
## Testing business with users

Once an application is deployed, the QA team starts work on system- and acceptance-level testing. The starting point for this activity is the test plan document, which captures in detail the expected behavior of the application in various business scenarios. It highlights all the mission-critical workflows and helps the testing team understand the business deeply. With a test plan at hand, the next steps are to:
* **Document test cases:** In a manual scenario, test cases are maintained in a collaborative documentation platform, and the tester is responsible for execution and reporting on the test runs. Tools like Jira, Trello etc. are popular choices for this task. In an automation scenario, the test cases are encoded into a script or a 3rd-party automation tool like Selenium.
* **Prepare mock data:** Most test scenarios require users to input information; this behavior is replicated during system testing by preparing and using mock data as a substitute for user data. Tools like Visual Studio, Devart, DTM etc. can help you generate mock data.
* **Test execution:** With the right script and data, the test run takes place. A report is generated at the end, capturing various metrics that help gauge the outcome of the run. Another priority at this step is to execute on an applicable range of devices and browsers, to ensure wide compatibility and accessibility.
* **Documenting issues and exceptions:** During and after test execution, all unexpected behavior is captured and documented in the tracking tool, where it is marked up with metadata like date/time, location, screenshots/videos, steps to recreate, severity, ownership etc.
* **Iterate until fixed:** The dev team fixes the issues logged in the bug tracker, and the cycle repeats until the QA leads give the app a green light.
## Role and perspective of different stakeholders
At the level of business leadership, the expectation from the testing process is to ensure that the application(s) is deployed and available to the users while ensuring the best performance, accessibility, and experience. All the documented business requirements and design standards need to be implemented without any discrepancies. Ultimately, the product should be able to fulfil the business goals e.g. core workflows like sign up, log in, booking, product purchase, etc. should work in users’ hands without any hiccups.
From a technical leadership perspective, including PMs, the main goal is to achieve a continuous development and integration flow (CI/CD). The development and QA/QC teams work with each other in a synchronized manner. Agile has been widely accepted as the leading product management methodology in today's industry. The technical leaders will maintain a healthy QAOps workflow, ensuring that the scripts are running as scheduled and generating relevant reports periodically. They constantly monitor these reports, keeping a keen eye on whether the whole application ecosystem delivers on business goals.
At the level of a programmer, the goal of testing is to check and validate any new additions or changes to the code base, before and after they are deployed. In other words, testing for ‘bugs’ in the application code in one’s development, testing/staging, and finally, the production environment. Guidelines for testing within the scope of a particular module are shared with the team, and each contributing member has to write unit tests to cover any new functions or changes they have added to the codebase.
After the initial sanity check on the development side, the QA/QC team takes on testing it further. The testing process can be manual, automated, or mixed. Testers will document application behavior and anomalies in the form of defects logged in a reporting portal, where they can be assigned for fixing and tracking.
After QA approves a version of the app, it is released to a controlled set of users for the first stages of acceptance testing.
## Reporting and communications
Once you have a sound strategy and the right tools and team in place, you need dashboards, emails, and messages flowing between the stakeholders, acting as a stream of relevant information regarding tests, bug-fix status, performance, etc. It usually involves:
The business team evaluates the current state of application performance, superimposes near future business goals, and communicates the vision to the technical leadership and design team in form of business requirement documents.
Technical leadership with the QA team updates the test plan to include the overall approach collaboratively, involving all stakeholders.
Updating the test cases in the bug tracker, like a traceability matrix. Assigning ownership and passing the tasks to relevant developers.
Updating manual and automation scripts to include new cases and doing mock runs.
Reports that gather and display the status and outputs of all the test runs in a time frame, going out as emails, messages, and notifications.
It is important to have the relevant information reach all parties to ensure that the right action can be taken in the event of a discrepancy. When an application is being worked on regularly, it helps the stakeholders gauge the overall health of the application.
## Conclusion
It is imperative to set up a system where all participants can effectively collaborate on the testing process. To get the best results, you need the whole team to work together. Armed with the right knowledge and tools, you can effectively improve the quality of deliverables and achieve great things with your product. Although some would argue about the resource-heavy and time-consuming nature of the testing workflow, in a proper development environment, testing provides a unique opportunity to improve the quality of any software application without having to risk any real user or business opportunities.
| vivekmannotra |
1,132,225 | How to contribute to an open-source project | One of the During my career as a software developer, I started getting involved in some open-source... | 0 | 2022-07-05T08:38:20 | https://memphis.dev/blog/how-to-contribute-to-an-open-source-project// | opensource, github, contributorswanted, devrel |
During my career as a software developer, I started getting involved in some open-source communities as an active contributor. I never thought it would leverage my knowledge and experience to the level it did.
Hence, in the spirit of open-source, I co-founded Memphis.dev together with my 3 best friends from college – A [message broker](https://memphis.dev/blog/grpc-vs-message-broker/) for developers made out of devs’ struggles with using message brokers, building complex data/event-driven apps, and troubleshooting them.
---
**What is an open-source software/project?**
Open-source software (OSS) is software whose source code in some shape is open to the public, making it available for use, modification, and distribution with its original rights. Therefore, programmers who have access to the source code can change it by adding features to the project or software, changing it, or fixing parts that aren't working properly. OSS typically includes a license (Apache, BSD, MIT, GNU) that describes the constraints around the project and how "flexible" it is.
Read more about it in [Snyk's article](https://snyk.io/learn/open-source-licenses/).
---
**Where to find interesting open-source projects to contribute?**
So usually OSS contributors start to contribute to projects they are making use of. For example, a developer who works with Redis finds it interesting to go deep into Redis internals, understand what’s going under the hood, fix bugs, or add new features.
Specifically for developers without any former experience working with open-source products, my personal suggestion would be to go over the CNCF projects page. The CNCF (Cloud Native Computing Foundation) is the home of many cloud-native, open-source projects. Among the backed projects you can find Kubernetes, Prometheus, and much more. Undoubtedly, it is a really good place to find some interesting projects to start contributing to.
---
**The contribution process**
1. Find a nice project for example Redis, NATS, Memphis{dev}
2. Connect with the project’s community (Slack channels, Discord, website, GitHub repo, etc.)
3. Search for contribution guidelines. Often a file located within the project’s main repo called [CONTRIBUTING.md](https://github.com/memphisdev/memphis/blob/master/CONTRIBUTING.md)
4. Fork the GitHub repository — Create a copy of the entire repo on your GitHub account
5. Creating a separate branch from the main branch
6. Code your changes (bug fixes, new features, etc.)
7. Push
8. Create a pull request of your branch to the upstream repo
9. One of the repo maintainers reviews your PR (a review is usually requested automatically)
10. Fix issues and comments left by the maintainer
11. Awaiting your code to be merged
12. Celebrate your first contribution with some cold beer 🙂
---
[Join 4500+ others and sign up for our data engineering newsletter](https://memphis.dev/newsletter)
---
Follow Us to get the latest updates!
[Github](https://github.com/memphisdev/memphis) • [Docs](https://docs.memphis.dev/memphis/getting-started/readme) • [Discord](https://discord.com/invite/DfWFT7fzUu)
---
Originally posted on [Memphis{dev} blog](https://memphis.dev/blog/how-to-contribute-to-an-open-source-project/)
---
**Special thanks to [Idan Asulin](https://twitter.com/IdanAsulin1) for this amazing article!** | team_memphis |
1,132,435 | Functions in JavaScript | method-1 let name2="pushan"; function greet(name2 ) { console.log(This is my name... | 0 | 2022-07-05T11:35:16 | https://dev.to/pushanverma/functions-in-javascript-2knh | webdev, javascript, beginners, programming | _method-1_
    let name2 = "pushan";

    function greet(name2) {
      console.log(`This is my name ->${name2}`);
    }

    greet(name2);
_method-2_
    function greet2(name2, rollno) {
      console.log(`This is my name ->${name2} and rollno ->${rollno}`);
    }

    greet2(name2, 114);
_method-3_
    function greet3(name2, rollno = "114") {
      console.log(`This is my name ->${name2} and rollno ->${rollno}`);
    }

    greet3(name2);

_with return value_
    console.log("+++++++++++++++++++++++++++++++++++++++++");

    let no = 1;

    const value = function (no) {
      let sum = 0;
      for (let i = 1; i < 10; i++) {
        sum += no * 2;
      }
      return sum;
    };

    console.log(value(no));

_returning from objects_
    console.log("*****************************");

    let obj2 = {
      name3: "pushanVerma",
      rollno: 114,
      techStack: function () {
        return {
          java: "good",
          mysql: "moderate",
          javaScript: "Normal",
        };
      },
    };

    console.log(obj2.techStack());

| pushanverma |
1,132,439 | Hoisting and Temporal Dead Zone | Hoisting console.log('varname',varName); var varName; console.log('varname',varName); varName... | 0 | 2022-07-05T11:38:44 | https://dev.to/pushanverma/hoisting-and-temporal-dead-zone-1m0f | webdev, javascript, beginners, programming | **Hoisting**
    console.log('varname', varName);
    var varName;
    console.log('varname', varName);
    varName = "captain america";
    console.log('varname', varName);

    fn();
    function fn() {
      console.log("hello from fn");
    }
    fn();

**Temporal Zone**
The temporal dead zone is the region between the start of a scope and a `let`/`const` declaration, in which the variable exists but is inaccessible.
**with var**
    console.log(a);
    var a = 2;

**op -** `undefined`
**with let**
    console.log(b);
    let b = 3;

**op - (Error)** `ReferenceError: Cannot access 'b' before initialization`
**with const**
    console.log(c);
    const c = 4;

**op - (Error)** `ReferenceError: Cannot access 'c' before initialization`
| pushanverma |
1,132,519 | Alternatives to React: Inferno.JS | by Amazing Enyichi Agu Inferno JS is a JavaScript framework for building Front-End User Interfaces... | 0 | 2022-07-05T12:43:33 | https://blog.openreplay.com/alternatives-to-react-inferno-js | webdev, javascript, react, inferno | by [Amazing Enyichi Agu](https://blog.openreplay.com/authors/amazing-enyichi-agu)
**Inferno JS** is a JavaScript framework for building Front-End User Interfaces (UI). The [official website of the framework](https://www.infernojs.org/) states that "Inferno is an insanely fast, React-like library for building high-performance user interfaces on both the client and server".

Inferno JS emphasizes that its applications are fast and proposes a superiority to React in this aspect. This article will study the framework, expand on some of its attributes, and compare React and Inferno JS head to head regarding speed, size, syntax, and popularity. But first of all, let us look at a brief history of Inferno JS.
> **Alternatives to React** is a series of articles looking at different JavaScript front-end frameworks. All the examined frameworks are like React in syntax and operation, but they may have benefits React doesn't provide. The first article in the series was [Alternatives to React: Solid JS](https://blog.openreplay.com/alternatives-to-react-solid-js), and we'll be covering more frameworks in upcoming articles.
## Origin
[Dominic Gannaway](https://github.com/trueadm), who works at [Meta](https://about.facebook.com/) presently as a Software Engineer, built Inferno JS because he wanted to see if UIs could be better optimized for Mobile devices. He started the framework while working as a Software Development Engineer at Tesco. In an [Interview documented at Survive JS](https://survivejs.com/blog/inferno-interview/), Dominic stated that he was working on highly complex mobile web apps and ran into too many performance issues at the time. He was frustrated with the mentality that mobile was already fast enough when it wasn't to him. He decided to create a framework that would address those issues. He also added that curiosity was one of the reasons he created the project.
According to the documentation website, Inferno JS released version 1.0 in early January 2017. Some websites built with Inferno JS as listed [here](https://www.infernojs.org/about) include [Evite](https://evite.com) and [Globo](https://globo.com).
Inferno JS is very similar to React. In the interview documented at Survive JS, the framework's author stated that he wanted it to be as similar to React as possible. This was so developers don't have to spend extra resources (time and money) learning a new way of doing things. But the above point does not mean that the two frameworks are the same. In this section, we will look at three selected features of Inferno JS in detail. These features are:
- Components
- Virtual DOM
- Isomorphism
We'll consider the three topics below.
## Components
Just like React, UI elements are created as Components. Inferno JS states in their [documentation](https://www.infernojs.org/docs/guides/components) that there are three ways of creating components:
- Functional components
- ES2015 (ES6) class components
- ES5 class components
### Functional Components
There are two ways of creating functional components: using `createElement` or using `JSX`. A component created using `createElement` has the following syntax:
```javascript
import { render } from 'inferno';
import { createElement } from 'inferno-create-element';
function InfernoComponent({ name }){
return createElement('h1', null, `Hello ${name}`);
}
render(
createElement(InfernoComponent, {
name: 'Enyichi',
}), document.getElementById('root'));
```
As we can see, it involves installing an extra dependency called `inferno-create-element`. This works just like [`createElement()`](https://reactjs.org/docs/react-api.html#createelement) in React.
Note: the `render` function accepts two parameters: the first is the app to be rendered, and the second is the native DOM element in which the app will be rendered. In our case, the app will be rendered in an element with the `id` of `root`.
The alternative way to create functional elements is with JSX. This way doesn't require the `inferno-create-element`, and JSX is the recommended way of creating components in Inferno JS. The component above, when created with JSX, has the following syntax.
```javascript
import { render } from 'inferno';
function InfernoComponent({ name }){
return <h1>Hello {name}</h1>
}
render(<InfernoComponent name='Enyichi'/>, document.getElementById('root'));
```
[Lifecycle methods](https://www.infernojs.org/docs/guides/components) are passed as props in Inferno JS. An example of this is the component below:
```javascript=
import { render } from 'inferno';
import { createElement } from 'inferno-create-element';
function mounted(domNode){
console.log("This element has mounted!");
}
function InfernoComponent({ name }){
return createElement('h1', null, `Hello ${name}`);
}
render(
  createElement(InfernoComponent, {
name: 'Enyichi',
onComponentDidMount: mounted
}),
document.getElementById('root')
);
```
The same thing also works with JSX.
```jsx=
import { render } from 'inferno';
function mounted(domNode){
console.log("This element has mounted!")
}
function Component({ name }){
return <h1>Hello {name} </h1>
}
render(<Component name='hello' onComponentDidMount={mounted}/>, document.getElementById('root'));
```
### ES2015 (ES6) Class Components
The ES2015 class components work like Class components in React JS. Below is an example of what it looks like, using it and lifecycle methods.
```javascript
import { Component } from 'inferno';
class Counter extends Component {
constructor(props) {
super(props)
this.state = {
number: 0
}
// Initializing State
}
componentDidMount() {
console.log('DOM Node Mounted');
}
// Would run whenever the component mounts on the screen
handleClick = () => {
this.setState({number: this.state.number + 1})
}
// Would handle the button click
render() {
return (
<div>
<p>{ this.state.number }</p>
<button onClick={this.handleClick}>Increase</button>
</div>
)
}
}
render(<Counter/>, document.getElementById('root'));
```

On the console area, the output of the `componentDidMount` method runs.

And the App works as expected.
Note: CSS Styles were added to the components.
### ES5 Components
This way of creating components was implemented for backward compatibility with environments that don't support ES6 yet. A separate package called `inferno-create-class` has to be installed for this to work. More information about that can be found [here](https://www.infernojs.org/docs/guides/components).
## Virtual DOM
Inferno JS utilizes the Virtual DOM, just like React JS. This means that when we run `render(<App/>, document.getElementById('root'))` then `<App/>` represents the Virtual Document Object Model (DOM), created with JavaScript, which is rendered on the web page.
> Updating the virtual DOM is comparatively faster than updating the actual DOM (via js).
([source](https://en.wikipedia.org/wiki/Virtual_DOM))
Before the Virtual DOM, elements were updated on the page using [DOM manipulation](https://medium.com/swlh/what-is-dom-manipulation-dd1f701723e3#:~:text=DOM%20manipulation%20is%20interacting%20with,%2C%20move%20elements%20around%2C%20etc.) which was a relatively expensive operation. It was imperative and would slow down on bigger projects.
The Virtual DOM creates a clone DOM with Javascript. When changes occur, the virtual DOM is re-built and compared to the real DOM. Only components that need to be updated on the real DOM are changed on the screen, which gives a faster rendering process. A more in-depth definition of Virtual DOM is found [here](https://www.codecademy.com/article/react-virtual-dom).
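As a rough sketch of the concept (a toy diff over plain objects, not Inferno's actual implementation), the diffing step can be pictured like this:

```javascript
// Toy virtual nodes: plain objects holding a tag name and text.
function h(tag, text) {
  return { tag, text };
}

// Compare two flat virtual trees and collect only the nodes that
// changed; a real library would then patch just those real DOM nodes.
function diff(oldTree, newTree) {
  const patches = [];
  newTree.forEach((newNode, i) => {
    const oldNode = oldTree[i];
    if (!oldNode || oldNode.tag !== newNode.tag || oldNode.text !== newNode.text) {
      patches.push({ index: i, node: newNode });
    }
  });
  return patches;
}

const before = [h('h1', 'Hello'), h('p', 'Count: 0')];
const after = [h('h1', 'Hello'), h('p', 'Count: 1')];

console.log(diff(before, after)); // only the <p> node needs updating
```

A real virtual DOM library walks nested trees, reuses nodes via keys, and batches the resulting patches into minimal DOM operations.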
## Isomorphism
Isomorphic Applications are applications in which the lines of code can be run on both the client and server.
An example of this is the code below:
```javascript=
console.log('Hello World');
```
The above code is Isomorphic because it will run the same way on both the client (browser) and server (Node JS Runtime). An example of a non-Isomorphic code is:
```javascript=
const moment = require('moment');
```
The above code is not Isomorphic because the `require` keyword doesn't work in the Browser (Client Side). For the code to be isomorphic, we could make it detect its environment and then import the `moment` library.
```javascript=
let moment;

if (typeof document === 'undefined') {
  // Server side (Node.js): CommonJS require is available
  moment = require('moment');
} else {
  // Client side: a static `import` statement cannot appear inside a
  // branch, so the library is loaded dynamically instead
  moment = await import('moment');
}
```
Inferno JS allows building isomorphic applications that run on the server and client. Such applications increase speed, decrease load times, and help with SEO. The guide to building isomorphic applications in Inferno JS is found [here](https://www.infernojs.org/docs/guides/isomorphic).
## Open Source Session Replay
_[OpenReplay](https://github.com/openreplay/openreplay) is an open-source, session replay suite that lets you see what users do on your web app, helping you troubleshoot issues faster. OpenReplay is self-hosted for full control over your data._

_Start enjoying your debugging experience - [start using OpenReplay for free](https://github.com/openreplay/openreplay)._
## Comparison to React JS
In this section, Inferno JS will be compared with React in these three categories:
- Size
- Speed
- Syntax
- Popularity
### Size
According to [Bundlephobia](https://bundlephobia.com), the size of the Inferno JS package is 8.4kb (minified + gzipped). React, on the other hand, is 2.5kb. Using React for single-page apps requires utilizing another package called `react-dom`, whose size is 42kb (minified + gzipped). Inferno JS already has a renderer built in and therefore doesn't require an additional library. The extra `react-dom` library means that a React web app is relatively heavier.
### Speed
Performance speed is a very important criterion when picking a framework. Inferno JS beats React in speed according to Stefan Krause's analysis found [here](https://krausest.github.io/js-framework-benchmark/2022/table_chrome_102.0.5005.61.html). Stefan Krause's benchmark compares framework speeds to vanilla JS (which is faster than any framework or library) and ranks them from fastest to slowest. The three categories the frameworks are compared in are duration, startup metrics, and memory allocation; Inferno JS beats React in all three.
### Syntax
Inferno JS was built to be as similar to React as possible, hence the resemblance in syntax. Perhaps the most significant difference between React and Inferno JS in syntax is that Inferno JS's functional components don't use [hooks](https://reactjs.org/docs/hooks-overview.html) but instead receive lifecycle method names as props. Inferno JS also supports hyphenated CSS property names in inline style objects, which React doesn't. For example:
```javascript=
<h1
style={{
'background-color': 'purple',
'color': 'white',
'padding': '20px'
}}>I am a styled text</h1>
```

Most other things are consistent with the two frameworks, so transitioning to Inferno JS as a React developer is easy.
### Popularity
Popularity is a significant factor when considering a framework to use. It helps to know that many people use the framework you use. The bigger the community, the faster the help a developer will get, because chances are a lot of other people will have been in that scenario before. Comparing the popularity of React and Inferno, it is obvious React is more popular and has a more extensive community. But really, how big is Inferno's community?
In the latest yearly survey for Javascript developers, [State of JS (2021)](https://2021.stateofjs.com/en-US/libraries/front-end-frameworks), Inferno JS was not listed as one of the ten front-end frameworks. But that doesn't mean Inferno doesn't have a community. Comparing Github stars, below are the stats.
- [React](https://github.com/facebook/react): 190k
- [Inferno JS](https://github.com/infernojs/inferno): 15.4k
React leads here. But it helps that over 15,000 people have starred Inferno JS on Github.
When npm weekly downloads are compared, below are the numbers:
- [React](https://www.npmjs.com/package/react): 15.7M
- [Inferno JS](https://www.npmjs.com/package/inferno): 88.8k
## Conclusion
In this article, we have studied the following:
- History of Inferno JS
- Core features of Inferno JS (isomorphism, components, and its use of the virtual DOM)
- Comparison to React JS in speed, size, popularity, and syntax
Inferno JS is a nice alternative to React, actively developed and maintained. Give it a try.
## Resources
- [Getting started with Inferno JS](https://www.infernojs.org/docs/guides/installation)
[](https://newsletter.openreplay.com/)
| asayerio_techblog |
1,132,778 | What are tests and how do they work? | What is a test? A test is a piece of code that, when executed, calls the... | 0 | 2022-07-05T17:41:09 | https://dev.to/cristuker/tdd-o-que-e-e-como-comecar-15g0 | javascript, tdd, beginners, tutorial | ### What is a test?
A test is a piece of code that, when executed, calls the functionality you want to test and submits it to a scenario with parameters and return values fully controlled by you. This way you can understand how your code behaves in different scenarios and how it handles them, bringing much greater quality and safety to your code.
### A practical example
Let's consider the function below: a function that receives two numbers and returns the sum of them.
```js
// index.js
function soma(n1, n2) {
  return n1 + n2;
}

// Export so the test file can require it
module.exports = soma;
```
**Below is an example of a test file for the function above**
```js
//index.test.js
const soma = require('./index.js');

describe('Function soma', () => {
  it('When I pass 5 and 5 the function should return 10', () => {
    const resultado = soma(5, 5);
    expect(resultado).toBe(10);
  });

  it('When I pass 2 negative numbers the result should be a negative number', () => {
    const resultado = soma(-10, -5);
    expect(resultado).toBe(-15);
  });
});
```
We can see that the first thing to do is import the function we are going to test. Next, we have a function called 'describe'. Let's understand what this function does: first of all, it receives 2 parameters, a string and a callback function. The string is used to label the group of tests it refers to, in this case the imported `soma` function. The callback is used to declare the test cases, each of which is represented by an 'it' in the code. The 'it' function works the same way as the 'describe' function: it receives a string and a callback function as parameters. The string lets us know which test case it is, and the callback function lets us build our test.
Let's talk about the first test case. In it, I pass the number 5 as both parameters of the function and store the result in the constant `resultado`. And the magic of testing happens on the next line, where I do:
```js
expect(resultado).toBe(10);
```
What does this mean? That I expect (`expect()`) the constant `resultado` to be equal to 10 (`toBe()`). If that's right, the test will pass. Otherwise, an error like the one in the image below will appear:

And with this test's feedback in the terminal, we will understand what the next step in developing the code should be.
[Link to the StackBlitz with the example](https://stackblitz.com/edit/json-server-nrvzfr?file=index.js)
The library used to run the tests is Jest. To run the tests, just run the `jest` or `npm test` command in the terminal.
Thank you very much for reading 🙃
[Useful links](https://bento.me/cmsdev)
| cristuker |
1,133,197 | Commenting on Comments | A well-crafted comment, placed in the right spot, can be one of the most helpful and clarifying... | 18,788 | 2022-07-08T10:44:02 | https://dev.to/kalkwst/commenting-on-comments-3dn3 | programming, beginners, discuss | A well-crafted comment, placed in the right spot, can be one of the most helpful and clarifying things in a codebase. On the other hand, nothing can clutter up a module more and slow down developers than frivolous, dogmatic, or irrelevant comments.
Comments are a feature of a language, and thus, they are not inherently good or evil. However, they can be helpful or harmful depending on how they are used.
In this post, we're going to discuss some of the most common cases of effective and ineffective commenting practices, and try to understand this feature that can cause so much heated debate among developers.
## The Bad Parts
One of the most common motivations for adding comments in the code is that the code is bad. A badly written algorithm or a huge if-else chain are all prime candidates for comments. We know that we have made a mess. So we say to ourselves, "I'd better comment that!". And this is how evil comments are born.
Most comments tend to fall into this category. Usually, they are excuses for smelly code or justifications for poor design. These comments amount to little more than the ramblings of a madman.
### Mumbling
Plopping a comment because of policy or because you feel like it is not a solution. It's a hack. If you decide to add a comment, you have to make sure it is the best comment for the situation. Below is an example of a comment that had potential. The author though didn't want to spend the time or effort to write something meaningful.
```cs
public void LoadProperties()
{
    IConfiguration configuration = new ConfigurationBuilder()
.AddJsonFile("patches.json")
.AddEnvironmentVariables()
.Build();
var section = configuration.GetSection("EntityManager");
var exists = section.Exists();
if(exists) {
EntityManager.Host = section.GetValue<string>("Host");
EntityManager.Provider = section.GetValue<string>("Provider");
}
// No section means all defaults are loaded
}
```
What does this comment mean? It obviously means something to the author. But its meaning doesn't come through to us. Apparently if `exists` is false, there was no `EntityManager` section. But who loads the default values? Were they loaded before the call to `LoadProperties`? Or does it mean that the `EntityManager` already has default values? Or is there another method, called after `LoadProperties`, that loads the default configuration for the `EntityManager`? Did the author try to comfort themselves for leaving a gap in the logic? Or -and this is a scary prospect- is this a TODO comment through which the author tries to tell themselves to revisit this code later and actually load the defaults?
Such a comment will, in a best-case scenario, force us to examine code in other parts of the system to figure out what's going on. In a worst-case scenario, it forces us to make arbitrary assumptions. At first, I assumed that the defaults were preloaded when the `EntityManager` class was instantiated.
### Redundant Comments
In the next example, we have a simple function with a comment that is completely redundant. The comment is likely harder to understand than the code.
```cs
// This is a utility method that returns true when this.Closed is
// true. If timeout is reached then throws an Exception.
public bool WaitToClose(long timeoutMillis){
if(Closed)
return true;
Thread.Sleep(timeoutMillis);
if(!Closed)
throw new System.Exception();
return true;
}
```
What is the purpose of this comment? It's certainly not more informative than the code. It does not provide any documentation either. It does not provide any rationale. It is not easier to understand than the code. In fact, it is less precise and entices the reader to accept that lack of precision instead of true understanding. It acts like a salesperson trying to convince us that there is no need to check under the hood.
Now imagine that we have a legion of such useless and redundant comments across the codebase. They serve only to clutter the code. They serve no documentary purpose at all.
### Misleading Comments
They say that the road to hell is paved with good intentions. Oftentimes a developer will try their best to write a meaningful comment, but it will not be precise enough to be accurate. Remember for a moment the redundant and subtly misleading comment from the previous example.
```cs
public bool WaitToClose(long timeoutMillis){
if(Closed)
return true;
Thread.Sleep(timeoutMillis);
if(!Closed)
throw new System.Exception();
return true;
}
```
Did you see how this comment is misleading? The method doesn't return when `this.Closed` becomes `true`. The method returns if `this.Closed` is `true`. Otherwise, it will wait for some timeout and then it will throw an Exception if `this.Closed` is still false.
This is a subtle bit of misinformation, hidden inside a comment that is harder to read than the body of the code. It can cause a developer to just skim through the comment and expect that this will return as soon as `this.Closed` becomes `true`. This developer will soon find themselves debugging the code to find out why it executes so slowly.
### Noise Comments
We occasionally come across comments that are nothing but digital white noise. They restate the obvious and provide no new, insightful information.
```cs
// The day of the month
private int _dayOfMonth;
```
These comments are unhelpful and we eventually learn to ignore them. As we read through code, our eyes simply skip them. Eventually, these comments condition us to develop an even worse habit. After we encounter enough noise comments, we tend to skip all the comments we see in the code base.
### Banners
Sometimes, developers mark particular positions in the source file for different purposes. This can be done to mark a specific section of the code or create a rudimentary organizational system, among other uses. For example:
```cs
// ======================= Controller Actions ===================
```
It's rare for a banner like this to make sense. Even if it sometimes makes sense to group certain functions, it's more likely to be a code smell than a good opportunity. In general these comments are clutter and should be eliminated.
### Commented-Out Code
Few practices are as abhorrent as commenting-out code.
```cs
foreach (string filename in Directory.EnumerateFiles(StartDirectory))
{
    using (FileStream SourceStream = File.Open(filename, FileMode.Open))
    {
        using (FileStream DestinationStream = File.Create(EndDirectory + filename.Substring(filename.LastIndexOf('\\'))))
        {
            await SourceStream.CopyToAsync(DestinationStream);
        }
    }

    // using (FileStream SourceStream = File.Open(filename, FileMode.Open))
    // {
    //     await SourceStream.CopyToAsync(Orchestrator.Stream);
    // }
}
```
People who see commented-out code will probably never delete it. They'll think it was left there for a reason and that it's too important to delete. So commented-out code gathers like dregs in the bottom of a coffee cup, gradually becoming more and more difficult to sift through and decipher, often obscuring the purpose of the code containing it.
## The Good Parts
Even though most comments are malicious, and only obscure the code and mislead developers, there are some comments that are necessary or even beneficial.
### Informative Comments
It is sometimes useful to provide basic information with a comment. For example, the following code has a comment that explains the return value of an `abstract` method:
```cs
// Returns the data stored in this data structure.
protected abstract IEnumerable TraverseStructure();
```
A comment like that can sometimes be useful, but it is better to use the name of the function to convey the information where possible. In this case, for example, the comment could be made redundant by renaming the function to `GetData`.
Here's a case that's a bit better:
```cs
// format matched ddd dd MMM-HH:mm:ss.fff
string pattern = @"\w{3} \d{1,2} \w{3}\-\d{2}:\d{2}:\d{2}\.\d{3}";
```
In this case, the comment lets us know that the regular expression is intended to match a date and time string in the specified format.
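One way to keep such a comment honest is to check the pattern against a sample string. Here is a quick sketch in Python (same pattern idea as above, with the final dot escaped so that it matches a literal `.`):

```python
import re

# ddd dd MMM-HH:mm:ss.fff, e.g. "Wed 13 Jul-15:30:07.123"
pattern = r"\w{3} \d{1,2} \w{3}-\d{2}:\d{2}:\d{2}\.\d{3}"

def matches(text):
    return re.fullmatch(pattern, text) is not None
```

A check like this, kept next to the pattern, documents the format far more reliably than the comment alone, because it fails loudly if the pattern and the comment ever drift apart.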
### Explanation of Intent
Sometimes a comment goes beyond just useful information about the implementation and provides the intent behind a decision. Below is such a case of a decision documented by a comment.
When comparing two objects, the author decided that they wanted to use a different algorithm on particular objects.
```cs
public int IndexOfPattern(ISegment segment, string pattern)
{
    if (segment is MultilineSegment)
    {
        /* The search algorithm turned out to be slower than the Boyer-Moore
         * algorithm for the data sets in the segment, so we have used a
         * more complex, but faster method even though this problem does not
         * at first seem amenable to a string search technique.
         */
        return BoyerMoore(segment, pattern);
    }
    return MatchPattern(segment, pattern);
}
```
### TODO Comments
It is sometimes reasonable to leave "To Do" notes in the form of `TODO` comments. In the following case, the `TODO` comment explains why the function has a degenerate implementation and what the function's future should be.
```cs
//TODO these are not needed
// will be removed when we change our versioning model.
protected VersionInfo makeVersion()
{
    return null;
}
```
`TODO`s are small, non-essential tasks that the developer thinks should be completed at some point, but cannot be done at the moment. It might be a reminder to delete a deprecated feature or a plea for someone else to look at a problem. It might be a request to someone else to think of a better name or a reminder to make a change that is dependent on a planned event. Whatever a `TODO` might be, **it is not an excuse to leave bad code in the system**.
### Amplification
A comment may be used to emphasize or amplify the importance of something that might otherwise seem inconsequential or unimportant.
```cs
string path = PathAnalyzer.Path().Trim();
// the trim is really important here. It removes the starting
// spaces that could cause the item to be recognized
// as another item.
new PathItem(path, path.Content, this.level + 1);
```
### Documentation of Public API
There is nothing quite so helpful and satisfying as a well-documented public API. It would be difficult, at best, to write code without proper documentation. Having clear and concise documentation can make all the difference regarding code quality and maintainability.
If you are writing a public API, then you should certainly write thorough, accurate, and user-friendly documentation for it. But keep in mind the rest of the comments we discussed here. Documentation can be just as misleading, nonlocal, and dishonest as any other kind of comment.
## Comments, Yay or Nay?
Crafting the appropriate comment is one of the more difficult and subtle parts of day to day coding. A well-placed comment can make your day, while a bad, irrelevant or outdated comment can cause hours of frustration and debugging.
Code is constantly changing and evolving — pieces of it are constantly moving around. Refactorings can completely change the code in a module, and algorithms can be removed and replaced. Introducing a feature can cause ripple effects that mutate the code around it. Unfortunately, comments don't always follow these changes. And all too often, the comments get separated from the code they describe, becoming orphaned, inaccurate ramblings.
Before writing a comment, always consider whether there's a way to express yourself through code. If there's no way to do so, make sure your comment is clear, concise, and correctly reflects your intention.
| kalkwst |
1,133,252 | Educational App Development: 15 Must-Have Features for Smart and Systematic Learning | Technology becomes more excellent and influential every day, and several modifications come each day.... | 0 | 2022-07-06T10:22:30 | https://dev.to/ameliawenham/educational-app-development-15-must-have-features-for-smart-and-systematic-learning-1mc | webdev, javascript, programming, beginners |
Technology becomes more capable and influential every day, and new changes arrive all the time. The smartphone app is one of the biggest successes we have ever seen. In a short period, apps have found their way across every industry. From entertainment to meditation, there are applications for everything these days, and they can turn almost any activity into a fun user experience. Video streaming and instructional apps are the two most prominent forms of the mobile app boom. People no longer have to spend as much time completing their tasks, because well-designed mobile applications can answer many problems with a single click.
## Why Should You Design a Mobile e-Learning App?
Technology-enabled solutions have undoubtedly helped to develop all industries, including education. And this is reflected in the advent of more educational programs, study tools, and e-learning programs that allow students to continue their education despite widespread school closures.
## Features for an Educational Mobile App
The Features of an [educational mobile app service](https://richestsoft.com/education-app-development-company) ensure that the app works in line with its purpose. These features also ensure that the app meets the basic needs of users - students and teachers. Below are the primary components of the mobile tutorial app.
## Features for Teachers
In the educational mobile app, several features are available for teachers. Let's discuss some of them.

## Log-in and Account Set-up
The login component will allow teachers to enter the e-learning system and start using the platform. The account setup feature permits teachers to create profiles and confirm their identity via mobile phone or email.
## Profile Dashboard
This component will allow teachers to enter and display their professional knowledge in the app. The profile feature should let them add their job-related information, including their name, educational background, technical location, job title, courses or lessons offered certificates, and other appropriate details.
## Student Management Dashboard
The student management dashboard feature will permit teachers to organize and follow their knowledge of students and activities. This dashboard makes it easy and quick for teachers to view their students' assignments, needs, workload, tests, and attendance. Also, the student management dashboard is similar to a paper-based classroom record where teachers record their words and grading report.
## Course Management Section
This section allows teachers to collect and organize all the subjects and classes they presently hold. The course managing section should also contain a component that will allow teachers to upload material such as pictures, PDF documents, and lesson-related videos.
## Preliminary Features for Students
For students, several features also come with the educational mobile app. These are:

## Log-in and Account Set-up
This feature should allow students to access the app system using a variety of procedures: using a username, email, mobile phone number, or social media platform. The account setup feature is the same as for the teacher. Students can assemble a student account, verify their identity, and start using the app with the account design component.
## Student Dashboard
In addition to the student profile information such as name, gender, age, location, subjects, or subjects, the student dashboard should also allow them to view all information related to the current class, subject of course. This information may include linked teachers, a test schedule, a class plan, a list of projects or activities, and student progress.
## Search Feature
The search feature permits students to explore subjects, teachers, and programs found within the system.
## Payment
The payment feature gives students a faster, safer, and more suitable way to pay for their studies. You can combine different payment platforms to provide more flexibility.
## Excellent Components for an [Advanced Educational Mobile App](https://richestsoft.com/mobile-application-development.html)

In addition to the fundamental features that mobile apps require, you can also choose to add more advanced features to make your app more sophisticated. Let's discuss some essential features that advanced mobile apps have today.
## Live Streaming
Effective interaction between students and teachers is essential for effective learning. Sadly, most mobile apps you can find today do not have the features that develop interoperability. If you want your e-learning app to stand out among the wide range of options available in the market, consider integrating features that can help improve the interaction between students and teachers. One of these enhanced features is live streaming. Live streaming features help make learning more efficient and effective by allowing students and teachers to interact in real-time relations.
## File-Sharing
The file-sharing feature makes it possible for teachers and students, alike to share files such as videos, photos, documents, and other multimedia content. Users no longer need to send files through another forum such as email or social media as they can do this within the app. It makes downloading, uploading, and accessing a document more comfortable and faster.
## Exams and Practical Tests
Whether your institution pursues a modern or traditional learning method, you will need to provide students with practice tests and exams. Providing these is an essential part of learning. Similarly, it is one of the most effective ways to assess how much understanding students have gained so far. You should give exams and practice tests their own section in the app to make them easier for students and teachers to find. You can also add a selection of mock tests, which students can use to practice and study before taking a real exam.
## Hand Signal System
The non-verbal feedback component or hand signal system permits students to communicate with their teacher during live sessions without concerning the conference. For example, if a student has queries or concerns that require addressing, they can just click the raise hand signal. Moreover, if the teacher asks if everyone has understood the lesson, the students can comfortably click the approved hand signal to signify positive feedback.
## In-Session Chat
One of the most notable features of the best mobile education apps is internal and session chat. The in-session discussion feature allows students to raise questions about the subject, get answers from their teacher in real-time, participate in live discussions, and engage with classmates. Like live streaming, this feature also allows for development collaboration, promotes active participation, and better understands the lessons being presented.
## Final Words
Nowadays, [designing a mobile educational application](https://richestsoft.com/mobile-application-development.html) is no longer just a luxury but a necessity. But to satisfy your target users, it is not sufficient to design an app comparable to what's already prevalent. Your educational mobile app must stand out among the rest, and the best way to do this is to design it based on the specific requirements of your target users.
| ameliawenham |
1,133,635 | 1671. Leetcode Solution in CPP | class Solution { public: int minimumMountainRemovals(vector<int>& nums) { ... | 0 | 2022-07-06T14:44:05 | https://dev.to/chiki1601/1671-leetcode-solution-in-cpp-1b | cpp | ```
class Solution {
 public:
  int minimumMountainRemovals(vector<int>& nums) {
    vector<int> l = lengthOfLIS(nums);
    vector<int> r = reversed(lengthOfLIS(reversed(nums)));
    int maxMountainSeq = 0;

    for (int i = 0; i < nums.size(); ++i)
      if (l[i] > 1 && r[i] > 1)
        maxMountainSeq = max(maxMountainSeq, l[i] + r[i] - 1);

    return nums.size() - maxMountainSeq;
  }

 private:
  vector<int> lengthOfLIS(vector<int> nums) {
    // tail[i] := the minimum tail of all increasing subseqs having length i + 1
    // it's easy to see that tail must be an increasing array
    vector<int> tail;
    // dp[i] := length of LIS ending at nums[i]
    vector<int> dp(nums.size());

    for (int i = 0; i < nums.size(); ++i) {
      const int num = nums[i];
      if (tail.empty() || num > tail.back()) {
        tail.push_back(num);
      } else {
        int l = 0;
        int r = tail.size();
        // find the first index l in tail s.t. tail[l] >= num
        while (l < r) {
          const int m = (l + r) / 2;
          if (tail[m] >= num)
            r = m;
          else
            l = m + 1;
        }
        tail[l] = num;
      }
      dp[i] = tail.size();
    }

    return dp;
  }

  vector<int> reversed(const vector<int>& nums) {
    return {rbegin(nums), rend(nums)};
  }
};
```
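If you want to sanity-check the idea outside of the LeetCode judge, the same two-pass LIS approach can be sketched in Python (the function names here are my own, and `dp` records the length of the longest strictly increasing subsequence ending exactly at each index):

```python
import bisect

def length_of_lis(nums):
    """dp[i] = length of the longest strictly increasing subsequence ending at nums[i]."""
    tail = []  # tail[i] = minimum tail of all increasing subsequences of length i + 1
    dp = []
    for num in nums:
        i = bisect.bisect_left(tail, num)  # first index with tail[i] >= num
        if i == len(tail):
            tail.append(num)
        else:
            tail[i] = num
        dp.append(i + 1)
    return dp

def minimum_mountain_removals(nums):
    left = length_of_lis(nums)               # increasing run ending at each index
    right = length_of_lis(nums[::-1])[::-1]  # decreasing run starting at each index
    best = 0
    for l, r in zip(left, right):
        if l > 1 and r > 1:  # a peak needs a non-empty side on both ends
            best = max(best, l + r - 1)
    return len(nums) - best
```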
#leetcode
#challenge
here is the link for the problem:
https://leetcode.com/problems/minimum-number-of-removals-to-make-mountain-array/
| chiki1601 |
1,324,131 | How to Generate Lower-Case URLs in ASP.NET Core? | In this article, we will look at how to generate lower-case URLs in ASP.NET Core using the built-in... | 0 | 2023-01-10T19:49:08 | https://mbarkt3sto.hashnode.dev/how-to-generate-lower-case-urls-in-aspnet-core | ---
title: How to Generate Lower-Case URLs in ASP.NET Core?
published: true
date: 2022-12-24 18:57:26 UTC
tags:
canonical_url: https://mbarkt3sto.hashnode.dev/how-to-generate-lower-case-urls-in-aspnet-core
---
In this article, we will look at how to generate lower-case URLs in [ASP.NET](http://ASP.NET) Core using the built-in routing features.
## **Configuring the Routing Middleware**
In order to generate lower-case URLs in [ASP.NET](http://ASP.NET) Core, you will need to use the routing middleware. This middleware is responsible for mapping incoming requests to the appropriate controller and action in your application.
To configure the routing middleware, open the `Startup.cs` file and add the following code to the `Configure` method:
```cs
app.UseRouting();
```
This will enable the routing middleware in your application.
## **Adding a Route**
Next, you will need to add a route to your application. A route is a pattern that specifies the URL structure of your application.
To add a route, you can use the `MapControllerRoute` method on the endpoint route builder inside `UseEndpoints`. In its simplest form, this method takes two arguments: a name for the route and a string that specifies the route pattern.
Here is an example of how to add a route to your application:
```cs
app.UseEndpoints(endpoints =>
{
    endpoints.MapControllerRoute(
        name: "default",
        pattern: "{controller=Home}/{action=Index}/{id?}");
});
```
This route will match URLs that have a controller and an action specified, such as `/home/index`. The `{id?}` parameter is optional and will match any additional segments in the URL.
## **Generating Lower-Case URLs**
To generate lower-case URLs, you can use the `LowercaseUrls` property of the `RouteOptions` class. This property is a boolean value that specifies whether the generated URLs should be in lower case or not.
`RouteOptions` is configured through dependency injection rather than being passed to the route registration. To configure the `LowercaseUrls` property, add the following to the `ConfigureServices` method in `Startup.cs`:

```cs
services.AddRouting(options =>
{
    options.LowercaseUrls = true;
});
```

This will generate lower-case URLs for all routes in your application. If you also want the query string to be lower-cased, set `options.LowercaseQueryStrings = true` as well, but be aware that this can break case-sensitive query string values such as tokens.
## **Testing the Lower-Case URLs**
To test the lower-case URLs, you can run your application using the following command:
```
dotnet run
```
This will start the application and you can navigate to it in a web browser.
To test the lower-case URLs, browse around your application and inspect the links it generates (for example, from tag helpers or `Url.Action`): they should all be lower case. Note that `LowercaseUrls` only affects URL *generation*; incoming requests such as [`http://localhost:5000/Home/Index`](http://localhost:5000/Home/Index) will still be matched, since routing remains case-insensitive.
## **Conclusion**
In this article, we looked at how to generate lower-case URLs in [ASP.NET](http://ASP.NET) Core using the routing middleware. We saw how to configure the `LowercaseUrls` property of the `RouteOptions` class to generate lower-case URLs for all routes in our application.
By following these steps, you can ensure that the URLs in your [ASP.NET](http://ASP.NET) Core application are all in lower case, which can help with SEO and improve the user experience. | mbarkt3sto | |
1,134,179 | 394. Leetcode Solution in CPP | class Solution { public: string decodeString(string s) { stack<pair<string, int>>... | 0 | 2022-07-07T05:04:43 | https://dev.to/chiki1601/394-leetcode-solution-in-cpp-4mj2 | cpp | ```
class Solution {
 public:
  string decodeString(string s) {
    stack<pair<string, int>> stack;  // (prevStr, repeatCount)
    string currStr;
    int currNum = 0;

    for (const char c : s)
      if (isdigit(c)) {
        currNum = currNum * 10 + (c - '0');
      } else {
        if (c == '[') {
          stack.emplace(currStr, currNum);
          currStr = "";
          currNum = 0;
        } else if (c == ']') {
          const auto [prevStr, n] = stack.top();
          stack.pop();
          currStr = prevStr + getRepeatedStr(currStr, n);
        } else {
          currStr += c;
        }
      }

    return currStr;
  }

 private:
  // s * n times
  string getRepeatedStr(const string& s, int n) {
    string repeat;
    while (n--)
      repeat += s;
    return repeat;
  }
};
```
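For comparison, the same stack-based idea can be sketched in Python, where built-in string repetition replaces the `getRepeatedStr` helper:

```python
def decode_string(s):
    stack = []          # holds (prev_str, repeat_count) pairs
    curr, num = "", 0
    for c in s:
        if c.isdigit():
            num = num * 10 + int(c)      # accumulate multi-digit counts
        elif c == '[':
            stack.append((curr, num))    # remember context, start a fresh scope
            curr, num = "", 0
        elif c == ']':
            prev, n = stack.pop()        # close scope: repeat and re-attach
            curr = prev + curr * n
        else:
            curr += c
    return curr
```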
#leetcode
#challenge
here is the link for the problem:
https://leetcode.com/problems/decode-string/
| chiki1601 |
1,134,242 | Batch delete Docker images in a Azure Container Repository | This is mostly a note to self and i no credit whatsoever to myself. Info collected mainly from this... | 0 | 2022-07-07T07:35:11 | https://dev.to/olaj/batch-delete-docker-images-in-a-azure-container-repository-5429 | docker, azure | This is mostly a note to self and i no credit whatsoever to myself. Info collected mainly from this SO question, but i had some issues with the info there, mainly stuff caused by changes to Azure CLI and some code formatting issues.
https://stackoverflow.com/questions/41446962/how-to-delete-image-from-azure-container-registry
1. Login with Azure CLI, `az login`
2. Pick the correct subscription, `az account set -s your_id`
3. Then run this PowerShell command (it deletes every tag in the repository except the one listed in the `-notin` filter).
```
az acr repository show-tags -n YourRegistry --repository YourRepository | ConvertFrom-String | %{$_.P2 -replace "[`",]",""} | where {$_ -notin "4698a2ae296ed953890c1d61bfaf370deedfb29f" } | %{az acr repository delete -n YourRegistry --image YourRepository:$_ --yes}
```
| olaj |
1,135,008 | Agora sim, o grande ganho do enum no Dart 2.17 | Liquid syntax error: Tag '{% https://medium.com/dartlang/dart-2-15-7e7a598e508a %}' was not properly... | 0 | 2022-07-08T00:14:45 | https://dev.to/kmartins/agora-sim-o-grande-ganho-do-enum-no-dart-217-2ije | dart, flutter, enum, news |
As mentioned in the [previous article](https://kmartins.dev/enum-no-dart-215), it wasn't possible to declare **members** in our `enums`, and understandably a lot of `devs` were annoyed and hurt about it 😤
But to our delight, this has finally changed. Rest easy, you didn't read that wrong: improvement implemented and [issue](https://github.com/dart-lang/language/issues/158) closed _(hey, that even rhymed in Portuguese 🤪)_.
Come on, let's give that **refactor** to some code that is still using an `extension`!? I know you want to, so let's go...
- The version that makes you a bit sad to look at 🥲
```dart
enum Transport { car, truck, airplane, train, boat }

extension TransportExt on Transport {
  int getSpeed() {
    switch (this) {
      case Transport.car:
        return 65;
      case Transport.truck:
        return 55;
      case Transport.airplane:
        return 600;
      case Transport.train:
        return 70;
      case Transport.boat:
        return 22;
    }
  }
}
```
_Rest easy, you can still do it this way_ 😮💨
- The version that makes you happy to look at 😆
```dart
enum Transport {
  car(65),
  truck(55),
  airplane(600),
  train(70),
  boat(22);

  final int _speed;

  const Transport(this._speed);

  int getSpeed() => _speed;
}
```
How can something so "simple" make so many `devs` happy, right? 💁🏽‍♀️
Hey, hold on, it only gets better: there are more things that will give you a "burst" of happiness 🥳
- Use it with `generics`:
```dart
enum Transport<T extends num> {
  car<int>(65),
  truck<int>(55),
  airplane<double>(600.50),
  train<num>(70.25),
  boat(22); // Type inference also works ;)

  final T _speed;

  const Transport(this._speed);

  T getSpeed() => _speed;
}
```
- Add `mixins` and implement `interfaces`:
```dart
mixin Speedometer {
  // int speed = 0; If used in the enum, it must be final
  int get maxSpeed;
}

abstract class Validation {
  bool isValidSpeed(int speed);
}

enum Transport with Speedometer implements Validation {
  ...

  const Transport(this.maxSpeed);

  @override
  final int maxSpeed;

  @override
  bool isValidSpeed(int speed) => speed <= maxSpeed;
}
```
- Create `factory constructors`:
```dart
enum Transport {
  ...

  const Transport(this.speed);

  factory Transport.faster() => Transport.airplane;

  factory Transport.slower() => Transport.boat;
}
```
_Just don't forget: the `enum` is still a **constant**. You cannot **extend** it, **override** the `index`, `hashCode`, or the `==` operator, and don't even think about **declaring a member** named `values`_ 😉
About this improvement, I'll leave you with the following quote, whose author I don't know:
>_"There is no perfect beginning; evolution is a process."_
**Let's reflect** 🤔
If you've made it this far, don't be shy: leave a **like**, **share** this with other `devs`, and **follow me** on social media 🍻
_Did I forget or get something wrong? Don't hesitate to let me know_ 🤗
### Useful links
{% embed https://medium.com/dartlang/dart-2-15-7e7a598e508a %} | kmartins |
1,135,409 | My Six Months Strategy Plan for Building Up A Community. | Communities are an integral part of any product’s journey in tech. They help you grow at... | 0 | 2022-07-08T12:38:59 | https://dev.to/shrutiiaroraaa/my-six-months-strategy-plan-for-building-a-community-1736 | productivity, community, blog, opensource | Communities are an integral part of any product’s journey in tech. They help you grow at lightning-fast speed while not burning out and having fun!
For me, the community is more like “commUNITY”.
So, here’s what my 6 Months plan looks like -
1st Month -> I call the first phase “Create, Set and Address”.
So, in this initial phase, we can create the branding and set a mission for our community and address it to the people to make them aware of our mission. It will be the time phase to start delivering the right actions and message. The focus at this stage should be on the objective of building a community that attracts everyone.
2nd Month -> I call this stage “Think Ahead ”.
It is important to think ahead and create a detailed process to factor in new developments to the existing plan. After creating a safe space we now have to focus on the growth of the community. We now have to create a connection with the audience through social media pages, mailing lists, and discord channels. Now it's high time to be consistent.

3rd Month -> I call this stage “Call for Volunteers”.
Here, in this phase, we can create roles and have people volunteer and participate. We can also create some interesting challenges and some interaction sessions on trendy topics in tech which fulfils the requirement of community members.

4th Month -> I call this stage “Finding Sponsors and Connect ”.
So, connecting with others is a way to feel motivated and safe. In a large group, you have someone to listen to. We can Invite our sponsors or representatives and connect with them. We can Connect online or at In-person meetups.

5th Month -> I call this stage “Future Thinking ”.
Now, that we have made the momentum and have promises, now we have to keep going. We have to focus on staying agile and adaptive in ever-changing tech practices. We have to prepare some strategies to deliver and develop an improved product. We also have to ensure that buyers and users feel satisfied with our product.
6th Month -> I call this stage “Get Ready and Repeat ”.
Now we have successfully built up the community and have seen growth. Now it's time to repeat the whole process again.
**Note**: Once you get into it, there's no looking back.
🏁 The End
Let me know your feedback in the comment.
Thank you :)
| shrutiiaroraaa |
1,139,594 | 100 Days of Learning From My Mistakes – Day 2 | Why aren't my styles updating?! I've tried everything. Debugged my entire Sass project file by file.... | 0 | 2022-07-13T15:37:03 | https://dev.to/kondaguey/100-days-of-learning-from-my-mistakes-day-2-1ko5 | sass, css, cli, beginners | Why aren't my styles updating?! I've tried everything. Debugged my entire Sass project file by file. All imports correct. Everything peachy. What. The. Hell!
Did I forget CSS? Am I a terrible developer? Is this even for me? Why is the world against me?!
Oh. I forgot to run my Sass compiler...
```
npm run compile:sass
```
...deletes semi colon in any Sass file, hits cmd+s...the magnificent green console letters show up...main.scss compiled to app.css....
Dummy...
Note to self: Always make sure my Sass compiler is actually running before freaking out :)
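For future reference, the script behind that command lives in `package.json`; mine looks something like this (the compiler command and the input/output paths are assumptions based on my project layout, yours may use `node-sass` or different paths):

```json
{
  "scripts": {
    "compile:sass": "sass --watch sass/main.scss css/app.css"
  }
}
```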
-Dan
dndl.me | kondaguey |
1,148,869 | Luos integrations: survey | A few days ago, we created a poll to find out what integrations would be most useful for you who are... | 0 | 2022-07-22T15:44:51 | https://dev.to/luos/luos-integrations-survey-58f | opensource, microservices, embedded, luos | A few days ago, we created a poll to find out what integrations would be most useful for you who are using or discovering Luos.
Thanks for your opinion, it will help us to add your ideas in our roadmap.
As a result of this survey, we have added Zephyr, cloud providers (like AWS, Azure etc.), Zapier and Silicon Labs to our development roadmap. 🎉 | emanuel_allely |
1,151,674 | How to change React Native app name (iOS / Android) | In the example below I have a sample app that I named owler_franchise in the setup process. Now I... | 0 | 2022-07-26T10:03:00 | https://dev.to/nomanoff_tech/how-to-change-react-native-app-name-ios-android-5fnf | tutorial, mobile, reactnative | In the example below I have a sample app that I named **owler_franchise** in the setup process.

Now I want to change that to Owler Franchise.
- **iOS**
For **iOS**, go to **ios** folder and find the folder named after your project name.
In my case I called it **owler_franchise**. Inside that folder find the **Info.plist** file.

Inside **Info.plist** file replace the value after `<key>CFBundleDisplayName</key>` to your app name.

For my example, I changed it to **Owler Franchise**.

That's it for **iOS** 👍.

- **Android**
For **Android**: Go to the following folder in order: _android > app > src > main > res > values_ and open **strings.xml** file, replace the `<string name="app_name">` tag's value.


Again, in my example, I changed it to **Owler Franchise**:

That's it for **Android** too ✅.

Thanks for reading!
If you like this post, then follow me on twitter. You will be bombarded with coding memes 😁.
My twitter page 👉 [@nmnjnv](https://twitter.com/nmnjnv)
| nomanoff_tech |
1,180,422 | Simple Component ReactJS | A post by aboey | 0 | 2022-08-31T09:50:08 | https://dev.to/aboeywahab/simple-component-reactjs-39l8 | codepen | {% codepen https://codepen.io/aboeywahab/pen/MWVNMNb %} | aboeywahab |
1,180,488 | Top Flutter Benefits In Mobile And IoT Development. | We all know the power of Flutter when it comes to cross-platform mobile app development. But, Flutter... | 0 | 2022-08-31T12:28:43 | https://dev.to/bacancytechnology/top-flutter-benefits-in-mobile-and-iot-development-47l7 | flutter, mobileapp, iot, development | We all know the power of Flutter when it comes to cross-platform mobile app development. But, Flutter with IoT development is something interesting and more powerful. It already impressed the many developers around the world and day by day it's become more popular.
If you are about to do the IoT development, Flutter will be your smart choice. It is also beneficial from the business perspective with some outstanding Flutter business benefits.
Today, we are going to know what makes Flutter the most demanding framework to do IoT development.
## Key Benefits of Flutter for IoT App Development
### 1) Prototype Faster
For any kind of development, building a prototype first is very important. Businesses want to test prototypes in the market first to gauge how the final product is likely to perform. Flutter helps you develop a prototype faster than other technologies: you can reduce the complexity of your IoT application and build a prototype with Flutter instead of building the entire product up front.
Flutter makes IoT development easy and keeps the cost close to zero, which is the main reason developers around the world are impressed with Flutter for IoT development.
### 2) High Performance
In an IoT application, you need to deal with plenty of data, so you must have a robust prototype to handle all of it. The prototype you develop for IoT must be capable of handling data streaming, data manipulation, and loading visualizations.
In short, IoT demands a highly capable prototype that can run the project without errors and crashes, and Flutter lets you build such projects in very little time.
If you want to develop a high-performance app, Contact the [Mobile App development company](https://www.bacancytechnology.com/mobile-app-development) to do the development task.
### 3) Multi-platform Support
Flutter is a very popular choice for cross-platform application development, but it is not limited to that: Flutter is also well suited to desktop and web application development. Flutter's single codebase also makes it possible to build an IoT app for both Android and iOS. No matter what kind of development work you want to do, Flutter is always a go-to choice.
### 4) Reduce Overall Development Cost
We all want a high-performing IoT app for our business, but not at a cost higher than our budget. If you build your IoT application natively for both iOS and Android, it will definitely cost you more: you need to hire different developers and run separate development efforts. Here, Flutter is a blessing for developers, because it saves both development time and cost.
## Final Thoughts
At the moment, Flutter is trending and is one of the quickest ways to develop a high-performing cross-platform application for Android and iOS from a single codebase. So we hope you now see that Flutter is an ideal choice for IoT app development.
Don’t worry, we have a team of expert Flutter app developers who will help you develop your IoT app with Flutter. [Hire a Flutter developer](https://www.bacancytechnology.com/hire-flutter-developer) from us and start working on your IoT app now.
| bacancytech |
1,181,082 | Refactoring the Game | Our game is almost finished (at least part of it). But we can improve the module game and do it... | 19,513 | 2022-09-01T05:15:49 | https://dev.to/dnovais/refactoring-the-game-j7a | elixir, algorithms, webdev, tutorial | ---
series: Rock, Paper, and Scissors with Elixir
---
Our **game** is almost finished (_at least part of it_), but we can still improve the Game module, and we'll do it together.
**Let's start...**
I found modular arithmetic while looking for a mathematical approach to solving the logic of our game (if you are interested in the subject, take a look [here](https://www.khanacademy.org/computing/computer-science/cryptography/modarithmetic/a/what-is-modular-arithmetic)).
Now we'll use modular arithmetic to add a bit of math to our code and make it simpler and cleaner.
## The mathematical approach
The mod function gives the remainder when one integer is divided by another, and it will help us express the cyclical relationship between the three choices: Rock, Paper, and Scissors.
```Elixir
r = a mod b
r is the remainder when a is divided by b
```
So, looking into our code, more specifically on module attributes:
```Elixir
@stone 1
@paper 2
@scissor 3
```
I saw a tip that can help us make the calculation more efficient:
```Elixir
(first_player_choice - second_player_choice) % 3
```
## Refactoring the Game
Adding the function to calculate the result of the game `game_calc`:
```Elixir
defmodule Game do
  @moduledoc """
  Documentation for `Game`.
  """

  @stone 1
  @paper 2
  @scissor 3

  def play(first_player_choice, second_player_choice) do
    result(first_player_choice, second_player_choice)
  end

  defp result(first_player_choice, second_player_choice) do
    cond do
      first_player_choice == second_player_choice ->
        {:ok, "Draw!"}

      first_player_choice == @scissor && second_player_choice == @paper ->
        {:ok, "First player win!!!"}

      first_player_choice == @paper && second_player_choice == @stone ->
        {:ok, "First player win!!!"}

      first_player_choice == @stone && second_player_choice == @scissor ->
        {:ok, "First player win!!!"}

      first_player_choice == @paper && second_player_choice == @scissor ->
        {:ok, "Second player win!!!"}

      first_player_choice == @stone && second_player_choice == @paper ->
        {:ok, "Second player win!!!"}

      first_player_choice == @scissor && second_player_choice == @stone ->
        {:ok, "Second player win!!!"}
    end
  end

  defp game_calc(first_player_item, second_player_item) do
    rem(first_player_item - second_player_item, 3)
  end
end
```
And now we can simplify the `result` function:
```Elixir
defmodule Game do
  @moduledoc """
  Documentation for `Game`.
  """

  @stone 1
  @paper 2
  @scissor 3

  def play(first_player_choice, second_player_choice) do
    result(first_player_choice, second_player_choice)
  end

  defp result(first_player_choice, second_player_choice) do
    game_calc_result = game_calc(first_player_choice, second_player_choice)

    case game_calc_result do
      0 -> {:ok, "Draw!"}
      1 -> {:ok, "First player win!!!"}
      _ -> {:ok, "Second player win!!!"}
    end
  end

  defp game_calc(first_player_item, second_player_item) do
    rem(first_player_item - second_player_item, 3)
  end
end
```
### Running the tests:
```Bash
mix test
```
Something is wrong. We got three warnings and one failure message when we ran the tests.
```Bash
Compiling 1 file (.ex)
warning: module attribute @scissor was set but never used
lib/game.ex:8
warning: module attribute @paper was set but never used
lib/game.ex:7
warning: module attribute @stone was set but never used
lib/game.ex:6
..
1) test Game.play/2 when first player wins when first player chooses stone and second player chooses scissors (GameTest)
test/game_test.exs:55
Assertion with == failed
code: assert match == "First player win!!!"
left: "Second player win!!!"
right: "First player win!!!"
stacktrace:
test/game_test.exs:61: (test)
......
Finished in 0.05 seconds (0.00s async, 0.05s sync)
9 tests, 1 failure
Randomized with seed 811857
```
To solve the warning messages, we need to remove the module attributes:
```Elixir
defmodule Game do
  @moduledoc """
  Documentation for `Game`.
  """

  def play(first_player_choice, second_player_choice) do
    result(first_player_choice, second_player_choice)
  end

  defp result(first_player_choice, second_player_choice) do
    game_calc_result = game_calc(first_player_choice, second_player_choice)

    case game_calc_result do
      0 -> {:ok, "Draw!"}
      1 -> {:ok, "First player win!!!"}
      _ -> {:ok, "Second player win!!!"}
    end
  end

  defp game_calc(first_player_item, second_player_item) do
    rem(first_player_item - second_player_item, 3)
  end
end
```
And now, if we rerun the tests:
```Bash
mix test
```
We'll see only the test failure:
```Bash
Compiling 1 file (.ex)
....
1) test Game.play/2 when first player wins when first player chooses stone and second player chooses scissors (GameTest)
test/game_test.exs:55
Assertion with == failed
code: assert match == "First player win!!!"
left: "Second player win!!!"
right: "First player win!!!"
stacktrace:
test/game_test.exs:61: (test)
....
Finished in 0.04 seconds (0.00s async, 0.04s sync)
9 tests, 1 failure
Randomized with seed 730068
```
### Understanding the failure message
The failure happens because our formula passes a negative dividend to the kernel function `rem/2`. According to the documentation, this kernel function uses truncated division, which means that the result will always have the sign of the dividend.
When the first player chooses stone and the second player chooses scissors (a case the first player should win), the result is `-2`:
```Elixir
# stone = 1
# paper = 2
# scissor = 3
# R = (first_player_choice - second_player_choice) % 3
# R = (stone - scissors) % 3
# R = (1 - 3) % 3
# In elixir using rem/2
rem(1-3, 3)
> -2
```
### Solving the failure message
According to the documentation, the function `Integer.mod/2` computes the modulo remainder of an integer division.
**It's important to know:** `Integer.mod/2` uses floored division, which means that the result will always have the sign of the divisor.
So, when the first player chooses stone and the second player chooses scissors, the result is `1`:
```Elixir
# stone = 1
# paper = 2
# scissor = 3
# R = (first_player_choice - second_player_choice) % 3
# R = (stone - scissors) % 3
# R = (1 - 3) % 3
# In Elixir using Integer.mod/2
Integer.mod(1-3, 3)
> 1
```
So, to solve the failure, we need to replace the `rem/2` function with `Integer.mod/2`:
```Elixir
defmodule Game do
  @moduledoc """
  Documentation for `Game`.
  """

  def play(first_player_choice, second_player_choice) do
    result(first_player_choice, second_player_choice)
  end

  defp result(first_player_choice, second_player_choice) do
    game_calc_result = game_calc(first_player_choice, second_player_choice)

    case game_calc_result do
      0 -> {:ok, "Draw!"}
      1 -> {:ok, "First player win!!!"}
      _ -> {:ok, "Second player win!!!"}
    end
  end

  defp game_calc(first_player_item, second_player_item) do
    Integer.mod(first_player_item - second_player_item, 3)
  end
end
```
And now, finally, if we rerun the tests:
```Bash
mix test
```
All the tests pass with success \o/:
```Bash
Compiling 1 file (.ex)
.........
Finished in 0.04 seconds (0.00s async, 0.04s sync)
9 tests, 0 failures
Randomized with seed 992719
```
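As an aside, this truncated-versus-floored distinction is not unique to Elixir. JavaScript's `%` operator, for example, also truncates toward zero like `rem/2`, so in TypeScript a floored modulo has to be built by hand (a quick illustrative sketch, not part of the game's code):

```typescript
// % in JavaScript/TypeScript truncates toward zero, like Elixir's rem/2:
console.log((1 - 3) % 3); // -2

// A floored modulo, matching the behavior of Elixir's Integer.mod/2:
function mod(a: number, n: number): number {
  return ((a % n) + n) % n;
}

console.log(mod(1 - 3, 3)); // 1
```

Adding the divisor back and taking the remainder again guarantees the result carries the sign of the divisor, which is exactly why `Integer.mod/2` fixed our failing test.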
It's **time to celebrate**, the game **Rock, Paper, and Scissors** is **"done"**!
**Repository of the project:** https://github.com/dnovais/rock_paper_scissor_elixir
See you soon!
### Contacts
Email: contato@diegonovais.com.br
Linkedin: https://www.linkedin.com/in/diegonovais/
Twitter: https://twitter.com/diegonovaistech
---
#### Sources and references
- https://hexdocs.pm/elixir/1.12/Integer.html#mod/2
- https://www.cs.drexel.edu/~jpopyack/Courses/CSP/Fa18/notes/CS150_RockPaperScissors_Revisited.pdf
- https://www.cin.ufpe.br/~gdcc/matdis/aulas/aritmeticaModular.pdf
- https://www.khanacademy.org/computing/computer-science/cryptography/modarithmetic/a/what-is-modular-arithmetic
| dnovais |
1,181,374 | Design Patterns 2 | Types | Before diving into the types of design patterns, we need to brush up our memories on what design... | 0 | 2022-09-05T07:29:33 | https://dev.to/abdulhameedanofi/design-patterns-2-types-1jgk | Before diving into the types of design patterns, we need to brush up our memories on what design patterns are. Design patterns are templates for solving recurring problems in our software designs.
Now that we know what design patterns are, understanding the types of design patterns will make it easy to know the right one to use when solving your problem.
There are several types of design patterns which are categorized into three. They are:
- Creational,
- Structural, and
- Behavioral.
## Creational
These patterns are all about class instantiation and object creation. They can further be categorized as Class-creational patterns and Object-creational patterns.
Creational design patterns are the
- Factory method
- Abstract Factory
- Builder method
- Singleton
- Object pool
- Prototype.
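As a quick taste of one of these, here is a minimal Singleton sketch in TypeScript (the `Logger` class is just an illustrative example, not tied to any particular framework):

```typescript
// Singleton: the class guarantees a single shared instance.
class Logger {
  private static instance: Logger | null = null;

  // A private constructor prevents `new Logger()` from outside the class.
  private constructor() {}

  static getInstance(): Logger {
    if (Logger.instance === null) {
      Logger.instance = new Logger();
    }
    return Logger.instance;
  }

  log(msg: string): void {
    console.log(`[app] ${msg}`);
  }
}

// Every call returns the exact same object:
console.log(Logger.getInstance() === Logger.getInstance()); // true
```

Every consumer that calls `Logger.getInstance()` shares the same object, which is exactly the guarantee the Singleton pattern provides.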
## Structural
These patterns are concerned with how classes and objects are composed into larger structures to extend their functionality.
They are the
- Adapter
- Bridge
- Composite
- Decorator
- Facade
- Flyweight
- Private Class Data
- Proxy.
## Behavioral
These patterns are based on how classes and objects communicate with each other.
These patterns are the
- Chain of Responsibility
- Command
- Interpreter
- Iterator
- Mediator
- Memento
- Null Object
- Observer
- State
- Strategy
- Template Method
- Visitor Method.
Now that we know what the types of design patterns are, lets know more about its pros and cons in the [next article](https://dev.to/abdulhameedanofi/design-patterns-3-appendix-4e38). | abdulhameedanofi | |
1,181,688 | React Life Cycle | Every React web app comprises of components and these components go through some life cycle methods.... | 0 | 2022-09-01T18:09:53 | https://dev.to/abhinav707/react-life-cycle-4pj1 | webdev, react, javascript | Every React web app is composed of components, and these components go through some life cycle methods.
[](https://www.cuelogic.com/blog/reactjs-lifecycle#:~:text=Initialisation,constructor%20of%20a%20component%20class.)
These are:
* Initialization: The first stage in a React component's life cycle. In this stage, the default state and properties are initialized.
* Mounting: Mounting is when an instance of a component is being created and inserted into the DOM.
* Updating: Updating is the stage when the state of a component is updated and component is re-rendered.
* Unmounting: As the name suggests Unmounting is the final step of the component lifecycle where the component is removed from the page.
### Initialization
```
constructor() {
  super()
  this.state = {
    show: false
  }
  console.log("Constructor ran, state has been initialized");
}
```

### Mounting
React has four built-in methods that get called, in this order, when mounting a component:
- constructor()
- getDerivedStateFromProps()
- render()
- componentDidMount()
**Render**
The render() method is the only required method in a class component.
```
render() {
  console.log("render ran");
  return (
    <div>
      <h1>Show and Hide Counter</h1>
      <p>{this.state.text}</p>
      <button onClick={this.click}>{this.state.show ? "Hide counter" : "Show Counter"}</button>
      {this.state.show && <Counter/>}
    </div>
  )
}
```
The render() function should be pure, meaning that it does not modify component state, it returns the same result each time it’s invoked, and it does not directly interact with the browser.
**componentDidMount() Function**
It runs immediately after the component is inserted into the tree (mounting). This is where we run statements that require that the component is already placed in the DOM.
```
componentDidMount() {
  setTimeout(() => {
    this.setState({ text: "Text was mounted first and then it was changed after 5 sec" })
  }, 5000)
  console.log("component was mounted");
}
```

After 5 seconds

Mounting the Counter component:
When the Show Counter button is clicked, `render` runs again because we have changed the state.
```
click = () => {
  if (this.state.show === false) {
    this.setState({
      show: true
    })
  } else {
    this.setState({
      show: false
    })
  }
}
```
After that our counter component is rendered and mounted.

### Updating
A component is updated whenever there is a change in the component's state or props.
* componentDidUpdate
The componentDidUpdate method is called after the component is updated in the DOM.

Here, when we incremented the counter, the component was re-rendered and updated.
* shouldComponentUpdate
In the shouldComponentUpdate() method you can return a Boolean value that specifies whether React should continue with the rendering or not.
The default value is true.

Even though we are clicking the increment and decrement buttons, the component is not updating because our shouldComponentUpdate() method returns false.
```
shouldComponentUpdate() {
  return false;
}
```
---
App.js
```
import React, { Component } from 'react'
import Counter from './counter'
export class App extends Component {
  constructor() {
    super()
    this.state = {
      show: false,
      text: "See how the text will change"
    }
    console.log("Constructor ran, state has been initialized");
  }

  // componentDidMount() {
  //   setTimeout(() => {
  //     this.setState({text: "Text was mounted first and then it was changed after 5 sec"})
  //   }, 5000)
  //   console.log("component was mounted");
  // }

  click = () => {
    if (this.state.show === false) {
      this.setState({
        show: true
      })
    } else {
      this.setState({
        show: false
      })
    }
  }

  render() {
    console.log("render ran");
    return (
      <div>
        <h1>Show and Hide Counter</h1>
        <p>{this.state.text}</p>
        <button onClick={this.click}>{this.state.show ? "Hide counter" : "Show Counter"}</button>
        {this.state.show && <Counter/>}
      </div>
    )
  }
}

export default App
```
---
counter.js
```
import React, { Component } from 'react';
class Counter extends Component {
  constructor() {
    super()
    this.state = {
      counter: 0
    }
  }

  componentDidMount() {
    console.log("mounted the counter");
  }

  increment = () => {
    this.setState(prevVal => ({
      counter: prevVal.counter + 1
    }))
    console.log("button was clicked");
  }

  decrement = () => {
    this.setState(prevVal => ({
      counter: prevVal.counter - 1
    }))
    console.log("button was clicked");
  }

  componentWillUnmount() {
    console.log("unmounted");
  }

  shouldComponentUpdate() {
    return false;
  }

  componentDidUpdate() {
    console.log("component was updated");
  }

  render() {
    console.log("render ran for counter");
    return (
      <div>
        <button onClick={this.increment}>+</button>
        <h6>{this.state.counter}</h6>
        <button onClick={this.decrement}>-</button>
      </div>
    );
  }
}
export default Counter;
```
### References
* https://reactjs.org/docs/reactcomponent.html#componentdidmount
* https://www.w3schools.com/react/react_lifecycle.asp
* https://www.cuelogic.com/blog/reactjs-lifecycle
| abhinav707 |
1,181,829 | My Experience as a Mentor | Spanish Version "The circle is now complete. When I left you, I was but the learner. Now, I am the... | 0 | 2022-09-07T19:04:34 | https://dev.to/smmd/my-experience-as-a-mentor-4min | grow, programming, 100daysofcode, community | [Spanish Version](https://blog.thedojo.mx/2022/09/19/mi-experiencia-como-mentora.html)
"The circle is now complete. When I left you, I was but the learner. Now, I am the master." - Darth Vader.
Avoiding becoming a master of evil myself, in the last few months I had the opportunity to be the mentor of two incredible women.
The trip started with my participation in a training program in the company I work for; the company invited people with seniority experience to be mentors of junior and mid-levels. I created my profile following the recommendations of the program team. In the beginning, the truth is I was putting some limitations on myself, such as:
- "I don't know what to share"
- "I'm not sure if my experience is useful for anybody."
- "Maybe, I don't have a mentor profile."
- "What can I teach them that they don't already know?"
Luckily, I got my match with an excellent QA Engineer. She helped me break those limits because we created a solid mentee-mentor bond from the beginning. Sharing mentoring sessions with her, listening to her situations, offering recommendations, and sharing my experience was an excellent opportunity for me.
Almost at the same time, a member of a women-in-tech community where I have participated contacted me and asked me to mentor another woman, this time with a profile more similar to mine (backend engineer), and again, I was like
- "Oh... I don't know if I can do it."
But thank God I accepted, and luckily one more time, we also built a robust mentee-mentor bond, and it was a pretty cool feeling.
I was supporting and sharing things with two fantastic engineers. Today I do not regret anything; on the contrary, my two new friends gave me very positive feedback about my performance as a mentor. Also, another great win: during this time, one of them decided to take a new job offer, and I got to enjoy the process with her.

 _(Images source: Hercules 1997 [Disney](https://www.disney.com/) film)_
My first mentees are women, and I'm proud of that because I'm still working on engineering teams where female presence is around 10%. However, I do not limit mentoring by gender; I constantly try to share my knowledge by giving talks, writing articles, or offering my help. I think the first step to collaborating with the community is definitely to "start"; do not stop yourself from sharing your **knowledge**. Trust me; you have something to share.
### Here are some things that I learned and I can list from this experience:
1. We all have insecurities; help mentees to identify and manage theirs.
2. When our mentee shares something, we must be active listeners and ask for details. That means building a safe environment.
3. Help our mentees to be conscious of their capabilities; they must be able to recognize and compare how far they have come.
4. Usually, people do not need knowledge; they already have it or know where to obtain it. What they need is **inspiration**.
5. Being open and sharing vulnerable moments in life can help others to identify and pass their difficult moments faster.
6. Helping them to be visible in what they do on their team or job. Visibility can boost a career very quickly.
7. An informal talk can be more valuable than a formal one. However, do not lose sight of the fact that in professional career mentoring, the relationship, as the name says, is professional, and engineers are not psychologists. If you consider it proper, you can redirect your mentee to a specialist.
8. Share experience related to interview processes with your mentee; what you have already been through can speed up the mentee's process.
9. Do not forget to provide and ask for feedback constantly. Everyone is continually growing as professionals, do not be blocked from that, and listen/share the areas of opportunity.
10. Share sources, books, posts, podcasts, etc., that can be helpful for your mentee, but remember it is not homework; if they have the chance, they can use those resources.
There might be other things that I could list here, but I preferred to limit it to these ten because, in the end, we are all different people, and there is no zero-to-hero guide for being a **good mentor**. I can only tell you that if you want to try, take the risk and be open to constantly improving as a professional.
### What was my professional growth during this experience?
I identified the following:
- Confidence increased
- Had the chance to go over the knowledge
- Practiced the interview process
- Improvements in communication
- Learned from other areas
- Increased networking
- Experienced good feelings (fun, happiness, I felt proud)
- Increased discipline
Last but not least, I want to thank my mentees for everything you taught me during this time; today, thanks to you, I am a better Senior Engineer than yesterday. | smmd |
1,181,964 | First Post! | So I have decided to keep on track of my developer journal. I am going to try to post here daily with... | 0 | 2022-09-02T05:30:10 | https://dev.to/adamsteradam/first-post-4km3 | So I have decided to keep on track of my developer journal.
I am going to try to post here daily with what I have been doing, and what I have learned etc.
cheers.
| adamsteradam | |
1,182,052 | Install these 9 apps to 10x productivity 👇 | Do you have a smartphone without these apps? THROW IT!! Because, what’s the point if your smartphone... | 0 | 2022-09-02T08:04:52 | https://dev.to/areedev/install-these-9-apps-to-10x-productivity-2p1h | productivity, career, programming, beginners | **Do you have a smartphone without these apps? THROW IT!!**
**Because, what’s the point if your smartphone doesn’t make you smarter?**
.
.
.
.
.
## Install these 9 apps to 10x productivity 👇
**1. Readwise**
Readwise allows you to organize and review your e-book and article highlights.
Spaced Repetition's scientific process resurfaces your best highlights back to you at the right time with the help of a daily email.
This way, you'll be able to review and remember critical highlights from the books you've read.
----------------------------------
**2. Routinery**
Routinery is a popular app that aims to help you make habits (such as writing a journal, taking medication on time, or meditating) a part of your everyday schedule.
The app is difficult to ignore, and that's the point: the reminders annoy you into complying with the task.
-----------------------------------
**3. Tide**
Tide is the best Pomodoro app that aims at physical and mental care by integrating an app with sleep, meditation, relaxation, and focus.
Inspired by traveling, nature, and meditation, it provides massive audios, including natural soundscapes and mindfulness practices.
-----------------------------------
**4. Anki**
Anyone who needs to remember things daily can benefit from Anki. Since it is content-agnostic and supports images, audio, videos, and scientific markup, the possibilities are endless.
It's a lot more efficient than traditional study methods; you can either significantly decrease your time spent studying or greatly increase the amount you learn.
-------------------------------
**5. Zoho Vault**
Zoho Vault is a password manager app that generates strong passwords for your accounts and safely remembers them.
Vault keeps your passwords safe and auto-fills them across your favorite websites and mobile apps.
--------------------------------------
**6. Pocket**
If you come across an intriguing story, you do not have to read it right away. Instead, save it to Pocket and come back to it when you have more free time.
After you've saved a few stories in Pocket, the app may begin to propose related articles that you might be interested in reading.
------------------------------------
**7. Scrivener**
It helps you organize long writing projects such as novels, nonfiction books, academic papers, and even scripts.
Scrivener has many tools to help you edit more efficiently, correcting simple errors or restructuring entire sections and chapters.
--------------------------------
**8. Two Bird**
Twobird helps you focus on just the conversations, tasks, notes, and events that are important now, and clear out the things that can wait.
Twobird will filter your emails on a priority basis keeping everything under control and in context, for an easier day.
-----------------------------
**9. LockScreen Calendar**
This app will replace your regular lock screen with your calendar.
The moment you hit the power button on your phone, this calendar will be waiting for you showing the tasks which need your attention.
> Comment any app you use for a productive day.
| areedev |
1,182,146 | 0xDC is live! 🎉 | https://0xdc.me | 0 | 2022-09-02T11:16:48 | https://dev.to/0xdc/0xdc-is-live-361a | blog, devops, opensource, cloud | https://0xdc.me | 0xdc |
1,182,286 | James Altucher? | Curious whether anyone out there follows James Altucher? I have been researching how to hone the... | 19,654 | 2022-09-02T15:03:35 | https://dev.to/echristian74/james-altucher-2iac | jamesaltucher, ideas, trainyourbrain | Curious whether anyone out there follows James Altucher? I have been researching how to hone the ability to develop new ideas and James is rocking my world. [James Altucher Blog](https://jamesaltucher.com/blog/) | echristian74 |
1,182,928 | Commenting == Account Takeover | Hi y'all, how are you doing? Hope you are doing great 🤗 It's been quite a long time since my last... | 0 | 2022-09-03T23:08:20 | https://dev.to/therealbrenu/commenting-account-takeover-c54 | hacking, security, bugbounty, lowcode | Hi y'all, how are you doing? Hope you are doing great 🤗
It's been quite a long time since my last released blog, mostly because I didn't like the themes I've been coming up with. But today, I'm here to share the process that recently got me to find [CVE-2022-3019](https://huntr.dev/bounties/a610300b-ce3c-4995-8337-11942b3621bf/), a bug that isn't complex at all, but is definitely something cool to spot as a hunter.
## 🧐 Doing Some Code Review
The whole story started by reading huntr's hacktivity page. I had never thought about testing Low/No Code app builders before, and it only came to my mind after reading an interesting report there. So I decided to look for bugs in ToolJet, starting with a specific one: IDORs.
In order to do so, whenever I have access to the source code of a web app, I assume that every routine handling requests like this may be a good IDOR candidate:
```php
function showObject(request)
object = Entity.findOrFail(request.params.id)
return object
```
Why is it an IDOR candidate? Because, apparently, the only check being done by this routine is whether the object exists or not; there is absolutely no authorization check. It's not always that simple, because not every application handles authorization within the context of a service's function, but most web apps do it like this.
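For contrast, a routine that does enforce authorization could look something like this sketch (TypeScript, with made-up entity names; this illustrates the missing check, not ToolJet's actual code):

```typescript
// Hypothetical fix: check that the object belongs to the requester's
// tenant/organization instead of only checking that it exists.
interface Comment { id: number; organizationId: number; body: string; }
interface User { id: number; organizationId: number; }

const comments: Comment[] = [
  { id: 1, organizationId: 10, body: "internal note" },
];

function getComment(user: User, commentId: number): Comment {
  const comment = comments.find(c => c.id === commentId);
  if (!comment) throw new Error("Not found");
  // The authorization check the vulnerable pattern is missing:
  if (comment.organizationId !== user.organizationId) {
    throw new Error("Forbidden");
  }
  return comment;
}
```

With only a `findOrFail`-style lookup, any authenticated user can fetch any object by guessing IDs; the tenant comparison is what turns the lookup back into an authorized read.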
With this perfect scenario in mind, I went to the code of ToolJet's controllers, and ended up finding a very similar case. The `getComment()` method in the Comments Controller had only an authentication middleware, and no checks after that. So after creating an account, theoretically, I could read arbitrary comments even if they were not from the same tenant as my account.
`Cool, Breno. And this is the part when you report the super duper critical IDOR you just found, right? 🥳`

Definitely not. Because reading arbitrary comments, in my honest opinion, doesn't have any impact. Usually, when we are building stuff cooperatively, the kind of comments that we make doesn't contain sensitive information, and when it does, it's not in a way that can make sense to possible attackers.
## :eyes: Looking for Impact
So reading everybody's comments, by itself, is not a huge issue. But what else can we do with it?
I ran an instance of ToolJet, created an account, then created a comment and looked at the request that had this IDOR, in order to see what data was being returned to the browser. Surprisingly, this is what I received:

It was not only an IDOR, but also a case of Excessive Data Exposure! We got email and password hash being disclosed, and the worst part: we got the password reset token also being leaked! :scream:
At that moment, this token attribute had a null value, but this got me thinking: `what if I, as another user, pick up the email value and use it in the "forgot password?" page? Maybe it'll generate a token that I can see with this comment endpoint`. And that's what I tried.
I created a new account, accessed the comment, got the email of the first account from there, went to the "forgot password" page and pasted that email, then I accessed the comment again and boom! The generated password reset token was there! :partying_face:
From this point, it was just a matter of accessing the password reset URL with the generated token, and then I was able to change the password of the first user and impersonate them :detective:
## :bulb: Final Thoughts
I really like this finding, and I probably would never have found it if it wasn't for public reports showing me new kinds of apps we can test. Some bug bounty platforms have these amazing hacktivity pages, but we can also discover these cool things from YouTube videos, blogs, articles, etc. So... always try to absorb content that is publicly available; it tends to be worth it :hugs:
Also, the whole test started with a simple code review, and that's something that I usually try to do when looking for bugs in open source software. If it's something that interests you, I have this series on [OWASP API Top 10](https://dev.to/therealbrenu/series/17377), with some cool stuff that you can look for when hunting for bugs in APIs. Just a few topics include pseudocode examples (see [API3:2019](https://dev.to/therealbrenu/api32019-excessive-data-exposure-4c4p) and [API6:2019](https://dev.to/therealbrenu/api62019-mass-assignment-3b76)), but there's also some other stuff that you can test when in a live and running application!
## :hugs: Thanks for Taking Your Time to Read It!
| therealbrenu |
1,183,025 | Must have Custom hooks for NextJs | useLocalStorage the custom hook strapped this together to help save and retrieve... | 0 | 2022-09-03T08:15:51 | https://dev.to/tigawanna/must-have-custom-hooks-for-nextjs-3b5k | nextjs, typescript, localstorage, darkmode | # useLocalStorage the custom hook
### I strapped this together to help save and retrieve the `user-theme` and `token` from localStorage
I was working on [a React Vite app](https://github.com/tigawanna/gitpals) and had to port it over to Next.js ([the ported Next.js app](https://github.com/tigawanna/gitdeck)). I very quickly discovered that localStorage works a little differently in Next.js, and I needed it because that's where I was saving my personal access token and theme preferences. That led me down a rabbit hole which resulted in the custom hook below.
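Concretely, Next.js pre-renders components on the server, where `window` (and therefore `localStorage`) doesn't exist, so every read has to be guarded. A minimal sketch of that guard (the function name is just for illustration):

```typescript
// localStorage only exists in the browser; during Next.js server-side
// rendering, `window` is undefined, so fall back to a default value.
function readFromStorage(key: string, fallback: string): string {
  if (typeof window === "undefined") {
    return fallback; // server-side render: no localStorage available
  }
  return window.localStorage.getItem(key) ?? fallback;
}
```

This is also why the hook below only touches `window.localStorage` inside `useEffect`, which runs on the client after mount.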
```ts
import { useState, useEffect, useReducer } from 'react';
import { Viewer } from './../types/usertypes';

interface State {
  token: string | null;
  theme: string | null;
  mainuser?: Viewer;
  error?: string;
}

export const useLocalStorge = () => {
  const [loading, setLoading] = useState(true);
  const [state, dispatch] = useReducer(generalReducer, undefined);

  useEffect(() => {
    const gen = window.localStorage.general;
    if (gen) {
      dispatch({ type: "INIT", payload: JSON.parse(gen) });
    }
    setLoading(false);
  }, []);

  useEffect(() => {
    const colorTheme = state?.theme === "dark" ? "light" : "dark";
    const root = window.document.documentElement;
    // console.log("colorTheme ==== ", colorTheme);
    root.classList.remove(colorTheme);
    // console.log("theme reset to ==== ", state?.theme)
    if (state?.theme) {
      root.classList.add(state?.theme);
    }
  }, [state?.theme]);

  useEffect(() => {
    if (state) {
      window.localStorage.setItem("general", JSON.stringify(state));
    }
  }, [state]);

  return { loading, state, dispatch };
};

function generalReducer(state: State, action: any) {
  switch (action.type) {
    case "INIT":
      return action.payload;
    case "THEME":
      return { ...state, theme: action.payload };
    case "TOKEN":
      return { ...state, token: action.payload };
    case "ERROR":
      return { ...state, error: action.payload };
    default:
      return state;
  }
}
```
and you consume it like
```ts
import "../styles/globals.css";
import type { AppProps } from "next/app";
import { ReactQueryDevtools } from "react-query/devtools";
import { QueryClient, QueryClientProvider } from "react-query";
import { Layout } from "../components/Layout";
import { useLocalStorge } from "./../utils/hooks/useLocalStorge";
import GlobalContext from "../utils/context/GlobalsContext";

const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      refetchOnWindowFocus: false,
      refetchOnMount: false,
      refetchOnReconnect: false,
      retry: false,
      staleTime: 5 * 60 * 1000,
    },
  },
});

function MyApp({ Component, pageProps }: AppProps) {
  const local = useLocalStorge();
  // console.log("local state ==== ", local?.state)
  // console.log("initial value in local storage ==== ", value);
  return (
    <QueryClientProvider client={queryClient}>
      <GlobalContext.Provider
        value={{ value: local?.state, updateValue: local?.dispatch }}
      >
        <Layout local={local}>
          <Component {...pageProps} />
          <ReactQueryDevtools />
        </Layout>
      </GlobalContext.Provider>
    </QueryClientProvider>
  );
}

export default MyApp;
```
The context is optional, but it looks like this:
```ts
import React, { Dispatch } from "react";
import { Viewer } from './../types/usertypes';

export interface Value {
  token: string | null;
  theme: string;
  error?: string;
  mainuser?: Viewer;
}

interface Type {
  value: Value;
  updateValue: Dispatch<any>;
}

const init_data: Type = {
  value: { token: null, theme: "light" },
  updateValue: () => {},
};

const GlobalContext = React.createContext(init_data);
export default GlobalContext;
```
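Components then update the shared state by dispatching actions through `updateValue`. The reducer's transitions can be exercised on their own; a minimal standalone sketch (logic mirrored from the hook's `generalReducer`, with the `Viewer` field omitted and the token value made up):

```typescript
type State = { token: string | null; theme: string | null; error?: string };

// Mirrors the hook's reducer: every dispatch returns a new state object,
// which the hook then serializes back to localStorage.
function generalReducer(state: State, action: { type: string; payload?: any }): State {
  switch (action.type) {
    case "THEME":
      return { ...state, theme: action.payload };
    case "TOKEN":
      return { ...state, token: action.payload };
    case "ERROR":
      return { ...state, error: action.payload };
    default:
      return state;
  }
}

let state: State = { token: null, theme: "light" };
state = generalReducer(state, { type: "THEME", payload: "dark" });
state = generalReducer(state, { type: "TOKEN", payload: "example-token" }); // made-up token
console.log(state.theme); // dark
```

In the app itself, the same dispatch calls go through the context's `updateValue`.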
[The final project looks like this](https://gitdeck-two.vercel.app/)
| tigawanna |
1,183,032 | Custom react icon context wrapper | React icons are cool and all but they do have that pesky issue of having to wrap them in an `<IconContext.Provider>` in order... | 0 | 2022-09-03T08:33:25 | https://dev.to/tigawanna/custom-react-icon-context-wrapper-58b | reactic, react, context, tailwindcss | React icons are cool and all, but they do have that pesky issue of having to wrap them in an `<IconContext.Provider>` in order to be able to resize them or change their colors, so I made this wrapper:
```tsx
import React from 'react';
import { IconContext, IconType } from "react-icons";

type MyProps = {
  // using `interface` is also ok
  Icon: IconType;
  size: string;
  color: string;
  iconstyle?: string;
  iconAction?: () => any;
};

type MyState = {
  iconstyle: string;
};

export class TheIcon extends React.Component<MyProps, MyState> {
  constructor(props: MyProps) {
    super(props);
    this.state = { iconstyle: this.props?.iconstyle ? this.props.iconstyle : "" };
    this.clickAction = this.clickAction.bind(this);
  }

  // Run the optional click handler, if one was passed in.
  clickAction() {
    if (this.props.iconAction) {
      return this.props.iconAction();
    }
  }

  render() {
    return (
      <div>
        <IconContext.Provider
          value={{
            size: this.props.size,
            color: this.props.color,
            className: this.state.iconstyle,
          }}
        >
          <this.props.Icon onClick={() => this.clickAction()} />
        </IconContext.Provider>
      </div>
    );
  }
}
```
example usage
```tsx
import { FaTimes} from "react-icons/fa";
import { TheIcon } from './../Shared/TheIcon';
<TheIcon Icon={ FaTimes } size={"34"} color={"green"} />
```
[full project](https://github.com/tigawanna/gitdeck)
[live demo](https://gitdeck-two.vercel.app/) | tigawanna |
1,183,610 | ReactJS Installation & Setup Tutorial | React is currently one of the most popular JavaScript libraries for building UIs, and that trend looks... | 19,680 | 2022-09-06T18:56:09 | https://dev.to/rembertdesigns/reactjs-installation-setup-tutorial-449l | programming, productivity, tutorial, react | React is currently one of the most popular JavaScript libraries for building UIs, and that trend looks set to continue for the foreseeable future. In this article, we're going to focus on the ways we can set up React, quickly and painlessly, so we can dive right into coding!
## Why React?
For the uninitiated, React allows us to build extremely fast web apps through the use of the Virtual DOM — it essentially renders only what it needs to. Providing a lightweight alternative to the traditional way of working directly with the [DOM](https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model/Introduction).
React also promotes a component based workflow, meaning your UI is essentially just a collection of components. This makes for a fantastic building experience! As you’ll build with modularity, your code will be in neat self-contained chunks. And it’s also very useful when working in teams, individuals can work on parts of a project, while still working collectively toward project completion.
## Installation & Setup
There are a number of ways to get up and running with React. Firstly, we’ll take a peek at CodeSandbox and CodePen. If you want to instantly start playing around with code, this is a nice way to go!
We’ll then focus on spinning up a React project with Create React App — which gives you an awesome starting point for your projects, without the need to spend time setting up a build environment.
## React in CodeSandbox
CodeSandBox is an online code editor which you can use to get a React project up and running in no time at all.
Go to [codesandbox](https://codesandbox.io/s/) and click **React**.

Instantly, you’ll be in a React environment that has been configured with the _create-react-app_ structure. We’ll look at this structure further on in the article! If you want to start coding without setting up a local install, this is a great way to go! Tasks such as transpiling, bundling and dependency management are all automated, and you can easily share the link of anything you’ve been working on!
## React in CodePen
An alternative to CodeSandBox is [CodePen](https://codepen.io/). Many developers use CodePen to showcase their work by creating “pens” for quick code demos, or “projects” for when multiple files are involved. CodeSandbox is definitely more feature rich for working with React, however CodePen is also a fine solution.
I’ve created a CodePen React starter here:
{% codepen https://codepen.io/rembertdesigns/pen/VwxLoNQ %}
## Create React App
_Create React App_ is a tool (built by developers at Facebook) that’ll give you a massive head start when building React apps. It handles all of the configuration, so you don’t need to know any Babel or Webpack. You just run the initial install & you’ll be up in a local dev environment, in no time!
### Installing with Create React App
All we need to do is open up our terminal, and run the following:
```shell
npx create-react-app <app-name>
```
Where `<app-name>` is of course, the name of your app!
I use `npx` as it will download and run Node.js commands without installing them. If you don’t have Node installed, you can [download it here](https://nodejs.org/en/download/).
So go ahead and run the above command to begin the install!
The install might take a few minutes to complete. Right now it’s installing all of the dependencies required to build your project and it’s also generating your initial project structure.
Success! Now you can open up your project folder & check out the created file structure.
Additionally, a Git repository has been created for you. And several commands have been added into the `package.json` file:
- `npm start` starts the development server, including auto page reloads for when you make edits
- `npm run build` bundles the app into static files for production into a build folder
- `npm test` starts the test runner using Jest
- `npm run eject` ejects your app out of the create-react-app setup, which lets you customize your project configuration
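These four commands are wired up in the `scripts` block of the generated `package.json`, which typically looks like this (a sketch; the exact contents can vary between create-react-app versions):

```json
"scripts": {
  "start": "react-scripts start",
  "build": "react-scripts build",
  "test": "react-scripts test",
  "eject": "react-scripts eject"
}
```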
### Starting your Create React App
Now let's start up our local server! From your app folder, run:
```shell
npm start
```
Your app will launch in the browser on `localhost:3000`.
Now you’re done! Each time you start a new project with _create-react-app_, you’ll have the latest version of React, React-DOM & React-Scripts. Let’s now take a brief look at some of the features of _create-react-app_.
## Create-React-App Features
As you’ll see in the generated `package.json`, there are a number of commands which are available for use in your apps — lets take a look at these now.
### Building for Production
When the time comes to move from development to production, you can do so by running `npm run build`. A `build` folder will be generated containing all of the static files to be used on a production server.
The `build` command itself will transpile your React code into code that the browser understands (using Babel). It’ll also optimize your files for best performance, by bundling all of your JavaScript files into one single file, which will be minified to reduce load times.
### Testing your App
Included with _create-react-app_ is [Jest](https://jestjs.io/), which allows you to test your code by running `npm test`. It’ll launch in a similar manner to `npm start`, in that it will re-run your tests each time you make changes.
If you haven’t yet run unit tests, you can safely file this away as a ‘nice to know’ for now. For those interested in testing components with Jest, all you need to do is either suffix your required files with `.spec.js` or `.test.js`, or place your test files inside of a `__tests__` folder. Jest will run your tests for the files you specify.
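For instance, a hypothetical `sum.test.ts` file would be collected by Jest purely because of its `.test` suffix. Under Jest you would write `test('adds', () => expect(sum(2, 3)).toBe(5));`; the sketch below uses a plain assertion instead so it runs standalone without Jest installed:

```typescript
// sum.test.ts (hypothetical) -- the .test.ts suffix is what makes Jest pick it up.
function sum(a: number, b: number): number {
  return a + b;
}

// Plain stand-in for Jest's expect(sum(2, 3)).toBe(5)
if (sum(2, 3) !== 5) {
  throw new Error("sum is broken");
}
console.log(sum(2, 3)); // 5
```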
### Ejecting
Whenever you create an app with _create-react-app_, your build settings are not able to be altered, as they’ve been preconfigured in react-scripts. However, by running `npm run eject`, you can gain full control of your _create-react-app_ configuration.
The config will be copied into your app's directory in a new `config` folder, and your scripts into a `scripts` folder. You can then edit your Webpack, Babel and ESLint configurations to your heart's content.
_Note_: Running eject is permanent! Only do so if you’re ready to go it alone (you know what you’re doing!).
### Error Messages
Another helpful feature included with _create-react-app_ are the built-in error messages that generate to both the console and browser window.
Typos and syntax errors will throw a compilation error. And if you have a bunch of errors, you’ll get an overlay breaking down each.
## Summary
There we go! We’ve seen how to start playing around with React in CodeSandbox & CodePen. And we’ve looked at how to setup a local development environment with _create-react-app_. You don’t need to spend any time installing dependencies or configuring a build process — you can jump right into coding!
## Conclusion
If you liked this blog post, follow me on [Twitter](https://twitter.com/RembertDesigns) where I post daily about Tech related things!
 If you enjoyed this article & would like to leave a tip — click [here](https://www.buymeacoffee.com/rembertdesigns)
### 🌎 Let's Connect
- [Portfolio](https://www.rembertdesigns.co/)
- [Twitter](https://twitter.com/RembertDesigns)
- [LinkedIn](https://www.linkedin.com/in/rrembert/)
- [Hashnode](https://rembertdesigns.hashnode.dev/)
- [Devto](https://dev.to/rembertdesigns)
- [Medium](https://medium.com/@rembertdesigns)
- [Github](https://github.com/rembertdesigns)
- [Codepen](https://codepen.io/rembertdesigns) | rembertdesigns |
1,183,852 | Javascript: Declarative vs Imperative programming style | These are programming paradigms: Declarative: tells What to do Imperative: tells How to... | 19,800 | 2022-09-04T13:57:27 | https://dev.to/urstrulyvishwak/js-declarative-vs-imperative-programming-style-5g57 | javascript, programming, functional, tutorial | These are programming paradigms:
Declarative: tells `What to do`
Imperative: tells `How to do`
**Example: Find the summation of salary for the employees with dept 'justCode'**
### Imperative Style:
```javascript
const employees = [
{id: 1, name: 'james', dept: 'admin', salary: 10000},
{id: 1, name: 'Tom', dept: 'finance', salary: 10000},
{id: 1, name: 'peter', dept: 'justCode', salary: 12500},
{id: 1, name: 'tunner', dept: 'justCode', salary: 14500},
];
const justCodeDept = [];
// filter employees based on dept name.
for (let i=0; i<employees.length; i++) {
if (employees[i].dept === 'justCode') {
justCodeDept.push(employees[i]);
}
}
// summation of justCodeDept employees.
let summation = 0;
for (let j = 0; j < justCodeDept.length; j++) {
summation = summation + justCodeDept[j].salary;
}
console.log(summation);
```
### Declarative Style:
```javascript
const employees = [
{id: 1, name: 'james', dept: 'admin', salary: 10000},
{id: 1, name: 'Tom', dept: 'finance', salary: 10000},
{id: 1, name: 'peter', dept: 'justCode', salary: 12500},
{id: 1, name: 'tunner', dept: 'justCode', salary: 14500},
];
console.log(
  employees
    .filter(item => item.dept === 'justCode')
    .reduce((sum, item) => sum + item.salary, 0)
);
``` | urstrulyvishwak |
1,184,887 | when u rich | A post by murkings | 0 | 2022-09-05T00:56:46 | https://dev.to/murk/when-u-rich-alf | murk | ||
1,184,977 | A Trick to Further Breaking Down Angular Components | The usual Angular way of dealing with things is placing business logic in services and presentation... | 0 | 2022-09-05T05:40:21 | https://dev.to/bwca/a-trick-to-further-breaking-down-angular-components-1i8a | typescript, angular | The usual Angular way of dealing with things is placing business logic in services and presentation logic in components. It is a good approach, as all the heavy lifting is delegated to services which are imported and re-used by different components. Yet, sometimes components start acquiring logic of their own, with time developing private methods involved with data processing on component level.
This sometimes becomes a problem for unit tests, as private methods and state are not testable unless some dirty hacks are performed. Besides, unit testing a component requires mounting a module to declare it in, along with all dependent modules, which in turn increases the time required to run unit tests.
In this article I want to share an approach to further breaking Angular components into parts, so they are easier to create in unit tests and more flexible.
Let's take a look at a simple counter component:
```typescript
@Component({
template: `<p>{{ calledTimes }}</p>
<p><button (click)="increment()">increment</button></p>`,
selector: 'app-root',
})
export class AppComponent {
protected _calledTimes = 0;
public get calledTimes(): number {
return this._calledTimes;
}
public increment(): void {
this._calledTimes++;
}
}
```
It is pretty straightforward: click the button, and the counter increments by one. Is there a way to further break it down into pieces? There is: think of `AppComponent` as a class `App`, decorated by `Component` to create a new class which extends `App`. So let's extract the class and call it `Counter`:
```typescript
export class Counter {
protected _calledTimes = 0;
public get calledTimes(): number {
return this._calledTimes;
}
public increment(): void {
this._calledTimes++;
}
}
@Component({
template: `<p>{{ calledTimes }}</p>
<p><button (click)="increment()">increment</button></p>`,
selector: 'app-root',
})
export class AppComponent extends Counter { }
```
What benefits does it bring us? Well, we have just made a perfectly agnostic class, not bound to Angular, so it can be unit tested without mounting any NgModules. And since it is merely a class, it is free to be extended and decorated. We can create new components based on it or extend it into a service.
What if we wanted a new component that does the same counting, but increments by 100 instead of 1? Well, that's fairly easy to do by extending the counter and overriding the increment method:
```typescript
@Component({
template: `<p>{{ calledTimes }}</p>
<p><button (click)="increment()">increment</button></p>`,
selector: 'app-increment-by-100',
})
export class IncrementBy100Component extends Counter {
public override increment(): void {
this._calledTimes += 100;
}
}
```
So the main idea is that the `*Component`-derived class deals with gluing the class to the template while having no state of its own. It is like a combo of a stateful and a stateless component. | bwca |
1,185,163 | Build custom front-end on top of Snowflake database | Create customised Snowflake dashboards with DronaHQ Create custom Snowflake dashboards with DronaHQ... | 0 | 2022-09-05T11:15:53 | https://www.dronahq.com/snowflake-dashboard/ | tutorial, database, lowcode, frontend | <h2><b>Create customised Snowflake dashboards with DronaHQ</b></h2>
<span style="font-weight: 400;">Create custom Snowflake dashboards with [DronaHQ](https://www.dronahq.com/) and visualize your business metrics and KPIs in real time. Import your data directly from Snowflake and create your own dashboards. </span>
<span style="font-weight: 400;">In this article, we will walk through the process of building a user interface for the snowflake dashboard on DronaHQ and</span><span style="font-weight: 400;"> integrating it with the employee database created in Snowflake to make an HR analytics dashboard.</span>
<h2><b>What’s Snowflake and Snowflake dashboard?</b></h2>
<span style="font-weight: 400;">Snowflake is a cloud-based data warehousing company based in Bozeman, Montana. The firm offers a cloud-based data storage and analytics service, generally termed "data warehouse-as-a-service". Snowflake enables you to build data-intensive applications without an operational burden. A Snowflake dashboard </span><b>combines metrics and key performance indicators to produce a visually appealing chart or design, giving</b><span style="font-weight: 400;"> you and your team ready access to the information you need to continually improve business performance.</span>
<h2><b>Why build a dashboard for Snowflake?</b></h2>
<span style="font-weight: 400;">Snowflake is built specifically for the cloud, and it's designed to address many of the problems found in older hardware-based data warehouses, such as limited scalability, data transformation issues, and delays or failures due to high query volumes. </span>
<span style="font-weight: 400;">Here are five ways </span><b>snowflake visualization</b><span style="font-weight: 400;"> can benefit your business.</span>
<b>Performance and speed</b>
<span style="font-weight: 400;">The elastic nature of the cloud means if you want to load data faster or run a high volume of queries, you can scale up your virtual warehouse to take advantage of extra compute resources. Afterwards, you can scale down the virtual warehouse and pay for only the time you used.</span>
<b>Storage and support for structured and semi-structured data</b>
<span style="font-weight: 400;">You can combine structured and semi-structured data for analysis and load it into the cloud database without needing to convert or transform it into a fixed relational schema first. Snowflake automatically optimizes how the data is stored and queried.</span>
<b>Concurrency and accessibility</b>
<span style="font-weight: 400;">Snowflake addresses concurrency problems with its unique multi-cluster architecture: queries from one virtual warehouse never affect queries from another, and each warehouse can scale up or down as needed. Data analysts and data scientists can get what they need, when they need it, without having to wait for other loading and processing tasks to complete.</span>
<b>Seamless data sharing</b>
<span style="font-weight: 400;">The Snowflake architecture allows data sharing between Snowflake users. It also allows organizations to easily share data with any data consumer, whether a Snowflake customer or not, through reader accounts that can be created directly from the user interface. This functionality allows the provider to create and manage a Snowflake account for the consumer.</span>
<b>Availability and security</b>
<span style="font-weight: 400;">Snowflake is distributed across the availability zones of the platform on which it runs, AWS or Azure, and is designed to operate continuously and to tolerate component and network failures with minimal impact on customers. It is SOC 2 Type II certified, and additional levels of security, such as support for PHI data for HIPAA customers and encryption across all network communications, are available.</span>
<h2><b>Top Use Cases for Snowflake dashboard</b></h2>
<span style="font-weight: 400;">Popular Snowflake Use cases:</span>
<strong>1.</strong> <b>RETAIL TRANSACTION ANALYSIS</b>
In the retail environment, transactional data comes in large quantities. But quantity is not the only challenge for data analysis. Data must be kept fresh and up-to-date. Even with well-designed processes, your ETL and refresh can be time and resource constrained.
How Snowflake can help address these problems:
<b>Abstraction</b><span style="font-weight: 400;">: Snowflake's processing abstraction with Warehouses allows computing power to scale to meet business needs without changing infrastructure.</span>
<b>Backups</b><span style="font-weight: 400;">: Snowflake Time Travel keeps 90-day backups that are saved periodically, so you can quickly roll back to an older version of your data file (or even "freeze" the table) if something goes wrong.</span>
<strong>2.</strong> <b>MAKING HEALTHCARE ANALYTICS</b>
<span style="font-weight: 400;">Trend research can help healthcare organizations improve patient outcomes by identifying conditions, behaviours, and environmental factors. To conduct this research, organizations need vast amounts of public health data.</span>
<span style="font-weight: 400;">Snowflake features that can help address these problems:</span>
<b>Datalake reads</b><span style="font-weight: 400;">: Snowflake can read from an Amazon S3 data lake. Organizing disparate data helps structure the information, and the External Tables feature lets you expose it in a structured or semi-structured way.</span>
<b>Data Display</b><span style="font-weight: 400;">: Using the Variant Columns feature, Snowflake allows you to create semi-structured tables and load JSON and XML data into the database, object by object. This allows you to create views that are formatted and user-friendly.</span>
<strong>3.</strong> <b>FUELING MACHINE LEARNING</b>
<span style="font-weight: 400;">It would be great to have a crystal ball that could accurately predict changes in the stock market. While we all know that a crystal ball isn't realistic, you may be able to create a smart solution that helps reduce the risk of your decisions and improves your chance of being right. To create this type of solution, you should plan to use historical stock data and business data, reports and legislative data.</span>
<span style="font-weight: 400;">Snowflake offers several features that can help address this hypothetical application's needs:</span>
<b>Multi-cluster warehouse</b><span style="font-weight: 400;">: Allocating a large multi-cluster warehouse to your Snowflake team allows you to run multiple high-load queries simultaneously with fast responses.</span>
<b>Monetization</b><span style="font-weight: 400;">: The Snowflake Data Marketplace allows you to monetize the massive and valuable data set you have collected.</span>
<h2><b>How to use DronaHQ to build a Snowflake dashboard</b></h2>
<b>Step 1: Adding the Snowflake integration on DronaHQ</b>
<img class="alignnone wp-image-18508 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image13-1.png" alt="" width="1916" height="947" />
<span style="font-weight: 400;">First, go to the panel menu on the left and click on </span><b>“Connectors”. </b><span style="font-weight: 400;">It will show various [connectors](https://www.dronahq.com/integrations/) of different tools. On the top right corner just on the left of the profile icon, click on </span><b>“ + Connectors”.</b>
<img class="alignnone wp-image-18529 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image22-1.png" alt="" width="1920" height="936" />
<span style="font-weight: 400;">After clicking that button, we will see a list of databases and tools we can [integrate with dronaHQ](https://www.dronahq.com/integrations/). Click on </span><b>Snowflake</b><span style="font-weight: 400;">.</span>
<img class="alignnone wp-image-18514 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image6-3.png" alt="" width="1600" height="796" />
<span style="font-weight: 400;">After clicking that button, fill in the required information, such as the name of your connector and the details of your Snowflake account, so that it can be integrated.</span>
<span style="font-weight: 400;">Here is how you can find the account name in Snowflake.</span>
<b>Steps for finding the account name</b><span style="font-weight: 400;">:</span>
<ol>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Go to app.snowflake.com and log in using your credentials.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">On the Omnibox of the browser (where the URL of the page is displayed), you will find a profile menu, click on that.<img class="alignnone wp-image-18518 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image10-2.png" alt="" width="1898" height="1003" /></span><img class="alignnone wp-image-18527 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image20-1.png" alt="" width="542" height="38" /></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">The URL is of the form app.snowflake.com/[REGION]/[LOCATOR]. And the account name is of the form [LOCATOR].[REGION]</span></li>
</ol>
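As a quick sanity check, the URL-to-account-name mapping from step 3 can be sketched in code; the region and locator values below are made-up examples, not real credentials:

```typescript
// Derives the Snowflake account name ([LOCATOR].[REGION]) from an app URL
// of the form app.snowflake.com/[REGION]/[LOCATOR].
function accountNameFromUrl(url: string): string {
  const [region, locator] = new URL(url).pathname.split("/").filter(Boolean);
  return `${locator}.${region}`;
}

console.log(accountNameFromUrl("https://app.snowflake.com/us-east-1/abc12345"));
// abc12345.us-east-1
```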
<span style="font-weight: 400;">The rest of the details are pretty straightforward and can be found easily on Snowflake.</span>
<img class="alignnone wp-image-18510 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image2-4.png" alt="" width="1915" height="938" />
<span style="font-weight: 400;">After filling in all the details click on </span><b>“Test Connection”</b><span style="font-weight: 400;"> and wait for the ‘Connection is successful!’ message and then </span><b>“Save”</b><span style="font-weight: 400;"> the connector.</span>
<b>Step 2: Adding the dataset to Snowflake Database</b>
<ol>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Go to the Snowflake app and from the left panel click on </span><b>“Data”.<img class="alignnone wp-image-18515 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image7-3.png" alt="" width="1898" height="906" /></b></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Now click on </span><b>“+ Database”</b><span style="font-weight: 400;"> in the top right corner and create a new Database.</span></li>
<li><span style="font-weight: 400;">After creating the database click on it and make a schema using the button on the top right.<img class="alignnone wp-image-18531 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image24.png" alt="" width="1907" height="926" /></span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">This will create a new table where you can add the data using worksheets.</span></li>
</ol>
<b>Step 3: Building the queries in the Snowflake connector</b>
<span style="font-weight: 400;">After connecting with the snowflake connector, add queries to it through the </span><b>“+ Add Query” </b><span style="font-weight: 400;">Button.</span>
<img class="alignnone wp-image-18512 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image4-3.png" alt="" width="1918" height="946" />
<span style="font-weight: 400;">You can add CRUD Queries like add employee, get employee, delete employee, update employee. Here’s an example for reading the data.</span>
<img class="alignnone wp-image-18519 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image11-1.png" alt="" width="1917" height="933" />
<span style="font-weight: 400;">After writing the query, click on </span><b>“Test Query”</b><span style="font-weight: 400;"> and if everything works fine then </span><b>“Save”.</b>
<b>Step 4: Building the UI for Snowflake Dashboard on DronaHQ</b>
<span style="font-weight: 400;">1. Create a new Blank App on DronaHQ.</span>
<img class="alignnone wp-image-18520 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image12-1.png" alt="" width="1920" height="945" />
<span style="font-weight: 400;">A screen like this will appear</span>
<img class="alignnone wp-image-18521 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image14.png" alt="" width="1920" height="949" />
<span style="font-weight: 400;">2. Let’s start building our UI, On the left panel, click on “Control”, and a list of various [controls](https://www.dronahq.com/controls/#:~:text=The%20Summary%20control%20is%20an,hidden%20property%20in%20Freeflow%20editor.) will appear</span>
<img class="aligncenter wp-image-18523" src="https://www.dronahq.com/wp-content/uploads/2022/07/image16-1-e1659089374210.png" alt="" width="424" height="696" />
<span style="font-weight: 400;">3. Drag and drop 2 dashboard controls, 1 table grid and 2 chart controls onto the screen. After arranging the controls, your app will look like this:</span>
<img class="alignnone wp-image-18530 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image23.png" alt="" width="1920" height="944" />
<span style="font-weight: 400;">These UI controls currently display default data. Up next we will bind them to our Snowflake Database.</span>
<span style="font-weight: 400;">4. Now we will add data to all the controls. </span>
<span style="font-weight: 400;">a. Let’s start with the </span><b>table grid control (</b><a href="https://community.dronahq.com/t/table-grid-control/190"><b>Guide for Table grid control</b></a><b>)</b><span style="font-weight: 400;"> first, click on the table grid control and then on the data icon on the right panel. After that click on the </span><b>“Connectors”</b><span style="font-weight: 400;"> button, then on </span><b>“Select Connector”</b><span style="font-weight: 400;"> because we need to fetch the data from Snowflake Database.</span>
<img class="alignnone wp-image-18528 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image21-1.png" alt="" width="1999" height="980" />
<span style="font-weight: 400;">A list of connectors will appear, from that select </span><b>the Snowflake connector. </b><span style="font-weight: 400;">Various queries will be displayed that we made at the time of integrating the snowflake connector.</span>
<img class="alignnone wp-image-18516 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image8-1.png" alt="" width="1919" height="945" />
<span style="font-weight: 400;">We will select the </span><b>GetAllEmployee</b><span style="font-weight: 400;"> query to fetch all the data from the table, and in the control's properties choose the columns you want to show (all columns are selected by default). This will display the data in the table grid.</span>
<span style="font-weight: 400;">You can also use a similar control called Data Store control which allows you to fetch data from the database in one API call </span><b>(</b><a href="https://community.dronahq.com/t/data-store-control/183"><b>Guide for Data Store Control</b></a><b>)</b><span style="font-weight: 400;">.</span>
<span style="font-weight: 400;">b. Now let’s see for </span><b>chart controls (</b><a href="https://community.dronahq.com/t/charts-control/907"><b>Guide for Chart control</b></a><b>)</b>
<span style="font-weight: 400;">Go to the Data tab of the chart control and click on the connectors option. Then click on the </span><b>“select connector”</b><span style="font-weight: 400;"> button.</span>
<img class="alignnone wp-image-18513 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image5-3.png" alt="" width="1920" height="936" />
<span style="font-weight: 400;">A list of connectors will appear, click on your snowflake connector</span>
<img class="alignnone wp-image-18525 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image18-1.png" alt="" width="1916" height="951" />
<span style="font-weight: 400;">It will display a list of queries which are there in our connector</span>
<img class="alignnone wp-image-18516 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image8-1.png" alt="" width="1919" height="945" />
<span style="font-weight: 400;">We will select </span><b>GetAllEmployee</b><span style="font-weight: 400;"> Query to fetch all the data from the database. </span>
<span style="font-weight: 400;">After clicking on GetAllEmployee query it will show environments, click on continue without changing it. </span>
<span style="font-weight: 400;">After that, you will see the screen to select which columns to bind, select the columns which you want to use in your charts and click on Test & finish.</span>
<img class="alignnone wp-image-18511 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image3-4.png" alt="" width="1915" height="897" />
<span style="font-weight: 400;">Then go to the properties tab of chart control and search for the data section.</span>
<img class="alignnone wp-image-18517 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image9-2.png" alt="" width="1920" height="943" />
<span style="font-weight: 400;">Select the chart type you want to show and the columns for which you want to display data and that's it!</span>
<span style="font-weight: 400;">c. Finally, let’s look at our last control, the</span><b> dashboard control. (</b><a href="https://community.dronahq.com/t/dashboard-control/679"><b>Guide for Dashboard control</b></a><b>)</b>
<span style="font-weight: 400;">Dashboard control has options to add the data to 5 components, i.e. title text, header, footer, progress and text.</span>
<img class="alignnone wp-image-18522 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image15-1.png" alt="" width="1920" height="944" />
<span style="font-weight: 400;">Add data to the component using a connector, the process is the same as that of the chart control. You can use a custom formula with a connector by first adding the data through the connector and then in the custom formula edit the formula as shown below.</span>
<img class="alignnone wp-image-18526 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image19-1.png" alt="" width="1905" height="904" />
<span style="font-weight: 400;">BINDAPI </span><b>(</b><a href="https://community.dronahq.com/t/bindapi-function/779"><b>What is BINDAPI</b></a><b>)</b><span style="font-weight: 400;"> is the formula inherited from the connector; you just need to add functions such as count or multiply to shape the value the way you want it displayed.</span>
<span style="font-weight: 400;">So our </span><b>final dashboard</b><span style="font-weight: 400;"> will look somewhat like this, although it totally depends on how you’re visualizing your data.</span>
<img class="alignnone wp-image-18509 size-full" src="https://www.dronahq.com/wp-content/uploads/2022/07/image1-4.png" alt="" width="1825" height="902" />
<span style="font-weight: 400;">Congratulations! You have finally built your Snowflake dashboard.</span>
<h2><b>Key features of Snowflake dashboards that are built with DronaHQ</b><span style="font-weight: 400;">:</span></h2>
<ol>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Integrate database: Integrate with any database with the help of various connectors. DronaHQ supports up to 50+ ready connectors to popular databases like MongoDB, Firebase, Airtable, Influx and lots more. So while you are bringing Snowflake data to the front end, you can also join that data to information from other sources like Google Sheets, MS SQL, or Salesforces. </span><a href="https://community.dronahq.com/t/connectors-overview/402"><b>See all connectors here ></b></a><span style="font-weight: 400;"> </span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Controls: Dashboard control makes data interpretation convenient and easy. The table grid control provides all the CRUD operations for the Snowflake dashboard, and chart control offers various visualization options that are easy to modify. Beyond these, you can use form-based controls, map controls, donut chart controls and many more to make your application interactive. </span><a href="https://www.dronahq.com/controls/#:~:text=Summary,hidden%20property%20in%20Freeflow%20editor."><b>See all controls here ></b></a><span style="font-weight: 400;"> </span></li>
<li aria-level="1">Sharing Options: Collaborators can control the permissions of who can access the app and edit the app on the basis of roles. Permissions are an important part of healthy team collaboration: with the right permissions, you can ensure that only certain people are permitted to change the information on your bases.</li>
<li aria-level="1">Mobile + Web App output: All the micro-apps built on DronaHQ are also available in an employee portal mobile app.</li>
<li aria-level="1">Unlimited end-users: All plans of DronaHQ can have unlimited users, even with the free developer plan.</li>
</ol>
<h2><b>Build amazing dashboards with DronaHQ</b></h2>
<span style="font-weight: 400;">In this article, we have learned what a </span><b>Snowflake dashboard</b><span style="font-weight: 400;"> is and how it is used, walked through the steps involved in building one, and looked at its key features and components.</span>
<span style="font-weight: 400;">Now build your own dashboard on [DronaHQ](https://www.dronahq.com/) using any database or app be it a customer order dashboard, inventory dashboard, or salesforce dashboard.</span>
<span style="font-weight: 400;">Thank you!</span>
| aaikansh_22 |
1,185,183 | Deploying Next.js app on Netlify [Building Personal Blog Website Part 4] | Now it’s finally time to put your app online! First you need to push your Git repository (create... | 23,655 | 2022-09-05T11:52:11 | https://www.hwlk.dev/blog/personal-blog-tutorial-4 | strapi, nextjs, javascript, headlesscms | Now it’s finally time to put your app online!
First you need to push your *Git* repository (created automatically when you set up the *Next.js* app) to GitHub. Go to [github.com](http://github.com) and log in (or create an account if you haven’t done this already). Create a new repository by clicking in the upper right corner:

Add the name, decide whether the repository should be private or public and click *Create*. You should see something like this

Now go to the *Terminal* and in your app’s root folder commit all the files (if you put your environment variables into `.env` file then remember to add it to `.gitignore`). Now let’s push your repository to *GitHub*:
```bash
git remote add origin YOUR_GITHUB_REPOSITORY
git push -u origin main
```
When it’s through you should be able to see your files in *GitHub*:

Now you’re ready to deploy it to *Netlify*. Go to [netlify.com](http://netlify.com) and log in. I suggest logging in with *GitHub*. You should land on your dashboard where you can create a new site:

Select *“Import an existing project”* and then connect to *GitHub* and select your repository:

You should see something like this:

Leave all of the default values and click *Deploy site*. You’ll see that deployment is in progress. Wait a few minutes until it is done.

---
In case the deploy fails, just click on the failed deploy and try again. *Railway* sometimes has an issue of rejecting connections at the beginning.

---
When it’s done you’ll see the address of your new website:

Go to this address and you should see your work online!

Now whenever you push something to your repository *Netlify* will automatically rebuild your app. Remember! Changing something in your CMS does not trigger rebuilding so if you want to have fresh content - you’ll have to rebuild your app on *Netlify* manually on the *Deploys* tab on your *Netlify* dashboard:

Let’s make some changes to the site’s layout to see this automatic rebuild in action.
What your site lacks as of this moment is a nice navigation bar, basic footer and some responsive adjustments for the mobile. Create two new components in `components` directory:
`Navbar.jsx`
```jsx
import Link from "next/link";
import React from "react";
const Navbar = () => {
return (
<nav className="z-0 w-full">
<div className="z-10 bg-blue-500 shadow">
<div className="max-w-7xl mx-auto px-2 sm:px-4 lg:px-8">
<div className="flex flex-col sm:flex-row items-center justify-between gap-4 p-2 font-mono">
<Link href="/">
<a className="flex-shrink-0">
<h1 className="font-bold text-xl uppercase text-white">
My Personal Blog
</h1>
</a>
</Link>
<div>
<div className="flex gap-2">
<Link href="/">
<a className="px-3 py-2 text-sm font-medium text-white hover:underline hover:underline-offset-4 hover:text-white hover:font-bold transition duration-150 ease-in-out cursor-pointer focus:outline-none focus:text-white focus:bg-gray-700 ">
Home
</a>
</Link>
<Link href="/about">
<a className="px-3 py-2 text-sm font-medium text-white hover:underline hover:underline-offset-4 hover:text-white hover:font-bold transition duration-150 ease-in-out cursor-pointer focus:outline-none focus:text-white focus:bg-gray-700 ">
About
</a>
</Link>
<Link href="/contact">
<a className="px-3 py-2 text-sm font-medium text-white hover:underline hover:underline-offset-4 hover:text-white hover:font-bold transition duration-150 ease-in-out cursor-pointer focus:outline-none focus:text-white focus:bg-gray-700 ">
Contact
</a>
</Link>
</div>
</div>
</div>
</div>
</div>
</nav>
);
};
export default Navbar;
```
and `Footer.jsx`
```jsx
import React from "react";
const Footer = () => {
return (
<footer className="bg-blue-500 py-8 w-full">
<div className="flex flex-wrap justify-center">
<div className="text-sm text-white font-mono font-semibold py-1">
Copyright © YOUR_NAME 2022
</div>
</div>
</footer>
);
};
export default Footer;
```
Now adjust your `pages/_app.js` file to use those newly created components
```jsx
// import the two new components
import Navbar from "../components/Navbar";
import Footer from "../components/Footer";

function MyApp({ Component, pageProps }) {
  return (
    <div className="flex flex-col items-center bg-white">
      <Navbar />
      <Component {...pageProps} />
      <Footer />
    </div>
  );
}
```
And finally make small changes in `pages/index.js` to better display the post list on various screen resolutions:
```jsx
export default function Home({ posts }) {
return (
<section className="grid grid-cols-1 sm:grid-cols-2 lg:grid-cols-3 gap-4 my-8 mx-4">
{posts.map((post) => (
<BlogPostPreview post={post} key={post.attributes.slug} />
))}
</section>
);
}
```
Now commit all of the files and push the changes to GitHub.
```bash
git add .
git commit -m "Add navbar and footer"
git push
```
Wait a few minutes and check your website hosted on Netlify.

Looks better, doesn’t it?
And that’s it for today - your website starts going somewhere! In the next part of the series you’ll implement a simple *Show all posts for specific Tag* feature. See you then! | hwlkdev |
1,185,613 | PHP+Nginx+Docker | At one time or another I have to create a project from scratch, but sometimes I end up wasting a lot... | 0 | 2022-09-05T20:01:23 | https://dev.to/walternascimentobarroso/phpnginxdocker-nih | php, docker, nginx, xdebug | At one time or another I have to create a project from scratch, but sometimes I end up wasting a lot of time just creating the environment, so to make it easier I'll leave something ready with php and nginx using docker
## Readme
Start by adding a `README` to the project and as the project progresses you will edit it until it looks really cool
```shell
touch README.md
```
## Makefile
Now let's keep the Docker setup organized in its own folder; to simplify the commands we will use a `Makefile`
```shell
touch Makefile
```
## Docker Compose
Now let's create a docker folder and, inside it, for now only the `docker-compose.yml` file
```shell
mkdir docker && touch docker/docker-compose.yml
```
In my `docker-compose.yml` we will only have **nginx** and **php-fpm** for now
```yml
version: "3.9"
name: default
services:
nginx_default:
container_name: nginx_default
image: nginx:1.17.8
ports:
- 80:80
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
- ../:/var/www
links:
- php_default
php_default:
container_name: php_default
build: ./php
working_dir: /var/www
volumes:
- ../:/var/www
```
_💡Remember that I like to rename the container names to keep things organized; as this project is called default, I'll use `{imagename}_default`_
_💡Remember that if you add a `.env` file with the project name, you avoid creating orphan containers_
```shell
COMPOSE_PROJECT_NAME=mvc
```
If you prefer (I prefer) add the name to your project and it becomes simpler and more organized
## Nginx
One more configuration file is the `default.conf`, which we will also put in the docker folder
```conf
server {
listen 80;
server_name default.localhost;
error_log /var/log/nginx/error.system-default.log;
access_log /var/log/nginx/access.system-default.log;
root /var/www/public;
index index.html index.htm index.php;
charset utf-8;
location / {
#try to get file directly, try it as a directory or fall back to modx
try_files $uri $uri/ @mod_rewrite;
}
location @mod_rewrite {
#including ? in second rewrite argument causes nginx to drop GET params, so append them again
rewrite ^/(.*)$ /index.php?route=/$1;
}
# You may need this to prevent return 404 recursion.
location = /404.html {
internal;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass php_default:9000;
fastcgi_read_timeout 6000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
}
```
_💡Remembering that I'm changing the names to default_
## Xdebug and Composer
As I like to debug with **Xdebug**, I'll leave it ready together with Composer, so the PHP Dockerfile looks like this
```Dockerfile
# Image and version
FROM php:7.4-fpm
# Call PHP images script `docker-php-ext-install` and install language extensions
RUN docker-php-ext-install pdo_mysql
# copy the Composer PHAR from the Composer image into the PHP image
COPY --from=composer /usr/bin/composer /usr/bin/composer
# Install xdebug
RUN pecl install xdebug && docker-php-ext-enable xdebug
COPY xdebug.ini /usr/local/etc/php/conf.d/xdebug.ini
```
and the `xdebug.ini` file that will be copied to the container
```ini
[xdebug]
xdebug.mode=debug
xdebug.start_with_request=yes
xdebug.client_host=host.docker.internal
```
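To actually attach an editor to Xdebug, you still need a debug configuration on the host. Here is a minimal sketch of a VS Code `.vscode/launch.json`, assuming the PHP Debug extension and Xdebug 3's default port 9003 (adjust the path mapping if your code doesn't live in `/var/www`):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Listen for Xdebug (Docker)",
      "type": "php",
      "request": "launch",
      "port": 9003,
      "pathMappings": {
        "/var/www": "${workspaceFolder}"
      }
    }
  ]
}
```

With this in place, start the listener in VS Code, set a breakpoint, and reload the page in the browser; Xdebug inside the container connects back to your host via `host.docker.internal`.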
Before finishing, let's adjust the Makefile, looking like this:
```Makefile
up:
	docker-compose up -d

stop:
	docker-compose stop

destroy:
	docker-compose down

build:
	docker-compose up --build -d

# install PHP dependencies inside the php container
composer:
	docker-compose exec php_default composer install
```
Here's a little secret: since the Docker files are in a separate folder, for the Makefile to find them we can simply add the compose file path to a `.env` file, looking like this:
```env
# Docker
COMPOSE_FILE=docker/docker-compose.yml
```
## Composer.json
and finally let's create a `composer.json` file just to declare that I use PSR-4, so the autoloader is ready
```json
{
"autoload": {
"psr-4": {
"App\\": "app/"
}
}
}
```
Now let's create an `index.php` just to see if everything is working fine. Remember that the **nginx** configuration file points to a folder called `public`, so we have to create the file inside it
```shell
mkdir public && touch public/index.php
```
and in `index.php` we will have
```php
<?php
require __DIR__ . '/../vendor/autoload.php';
phpinfo();
```
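To sanity-check that the PSR-4 autoloader is wired up, here is a minimal sketch of a class under the `App\` namespace (the class name and file are illustrative, not part of the original project):

```php
<?php
// app/Greeting.php — loaded automatically via the PSR-4 mapping in composer.json
namespace App;

class Greeting
{
    public function hello(string $name): string
    {
        return "Hello, {$name}!";
    }
}
```

After running `composer dump-autoload`, you can call `(new \App\Greeting())->hello('world')` from `public/index.php` without any `require` for the class file.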
## Conclusion
Now we just have to run
```shell
make build
make composer
```
## Extras
### gitignore
Before submitting the project to Git, remember to add a `.gitignore` file and list the vendor folder
```
touch .gitignore
```
and as I always use **VS Code**, I will also add the folder that it automatically generates
```
vendor
.vscode
```
### Editconfig
I also like to use **EditorConfig**, but before using it I like to enable VS Code's format-on-save. First go to the VS Code settings:
`Code > Preferences > Settings` and enable the “Format on Save” option
And finally, install the editorconfig extension.
For now we are going to use this configuration
{% gist https://gist.github.com/walternascimentobarroso/7c6ac0e38187ee59ee78bc7883c0327a %}
### LICENSE
finally, as this project is open I will leave the license as MIT
## Project
If you want to use this project as a template for your next projects go to:
https://github.com/walternascimentobarroso/php-nginx-docker
***
## Thanks for reading!
If you have any questions, complaints or tips, you can leave them here in the comments. I will be happy to answer!
😊😊 See you! 😊😊
***
## Support Me
[Youtube - WalterNascimentoBarroso](https://www.youtube.com/channel/UCXm0xRtDRrdnvkW24WmkBqA)
[Github - WalterNascimentoBarroso](https://github.com/walternascimentobarroso)
[Codepen - WalterNascimentoBarroso](https://codepen.io)
| walternascimentobarroso |
1,185,960 | Ben 10- Omnitrix | Hello Developers. I'm new to this kind of community thing. Hope I'll get to know about this soon.... | 0 | 2022-09-06T09:52:40 | https://dev.to/thisissherlock1/ben-10-omnitrix-fkb | javascript, css | Hello Developers. I'm new to this kind of community thing. Hope I'll get to know about this soon.
First, let me ask: who was your favourite cartoon character in your childhood? The one that brings you back to those days and makes you feel nostalgic?
For me, it's **BEN 10**. I hope you know the show. If you do, fasten your seat belts: I am going to take you back in time.
Just to make this a bit tricky, I'm gonna shoot some easy questions at you. If you know the answers, feel free to post them in the comment section.
- Who is the main antagonist in _Ben 10 Classic_?
- Who is the creator of _Omnitrix_?
- What is the name of this Alien?

Here are some sample screenshots of the website.🧬


**Click the below link to Get the Omnitrix (Virtual) Hands on Experience.**
[Link🔗](https://ben-10-watch.vercel.app/)
| thisissherlock1 |
1,186,117 | Remote Debugging Webpages In iOS Safari | Safari is one of the most popular web browsers. Developed and promoted by Apple , it is based on the... | 0 | 2022-09-06T13:41:37 | https://www.lambdatest.com/blog/remote-debugging-webpages-in-ios-safari/ | debugging, ios, safari, webpages | Safari is one of the most popular web browsers. Developed and promoted by Apple , it is based on the WebKit engine. The first version of the browser was released in 2003 with Mac OS X Panther. With the launch of the iPhone in 2007, a mobile version of the browser has been included in [iOS ](https://en.wikipedia.org/wiki/IOS)devices as well.
Unless specifically changed by the user, Safari is the default browser on all Apple devices, and that is one of the main reasons why it is still popular among internet users.
Developing web pages or debugging hybrid applications is tough. However, to the utter delight of all computer science professionals, Apple offers a remote ‘Web Inspector’ for debugging.
Remote debugging, is debugging a piece of code by connecting the application running remotely with your environment of development. Generally, developers do this using a debugger which has support for remote debugging, and a small debug server running on a remote machine.
This small debug server must allow the debugger to interact with the code running on the remote machine as if it were running on the same machine.
Remote debugging Safari can be painful especially if you are not a Mac user. Web developers, generally are familiar with the tools available to them while local page testing, but face problems while debugging applications developed for mobile devices such as the iPad or iPhone.
A possible solution to the problem is remote debugging: making use of the same tools as you would on a desktop, except that now you are connected to your mobile device. Debugging Safari on iOS requires a minimum of Safari 6. However, there is bad news for Windows and Linux developers: the latest version of Safari is not available at all for Linux, while for Windows only Safari 5 is available.
## Remote Debugging iOS Safari on OS X
Before starting with the debugging process, you will need a device running iOS, such as an iPad or iPhone, connected to a Mac computer by USB. Safari version 6 or later must be installed on the computer.
The next step demands you to enable ‘Web Inspector’ on your iOS device. This can very easily be done by going to **Settings > Safari > Advanced**. Toggle the ‘Web Inspector’ so that it gets enabled in case it was not by default.
After successfully enabling ‘Web Inspector’, you should also enable the ‘Develop’ menu in Safari on your Mac computer. This can be done by going to **Safari > Preferences > Advanced** and checking the checkbox for the Show Develop menu in the menu bar. In case it is already enabled, you do not need to do anything.
Next, open the web page you want to debug in Safari on your iOS device, making sure the device is plugged into your computer. Then, in desktop Safari, go to Develop > [your iOS device name] and click on the page you want to debug.
Finally, you can view and update the Document Object Model, and have access to the JavaScript console and other options and features.
## Remote debugging iOS Safari on Windows and Linux
Although not many options are available for remote debugging Safari from Windows or Linux, the situation is not as hopeless as it seems. A popular way out is to call an application called JSConsole to your rescue. The application works in an interesting way by inserting a script tag into the target web page that overrides the console behavior.
To begin with, go to jsconsole.com and execute `:listen` in the prompt. This will present you with a unique session ID and a script tag, which you then insert into your mobile web page.
Hence, any console output that your mobile page generates will be shown in the console open in your desktop web browser, including all errors. This gives you an alternate path to remotely debug iOS Safari. It is no replacement for a full web inspector, but it is a good way out when you don't have access to a Mac.
Thus, this was a quick overview of the process of remotely debugging iOS Safari on both Mac and non-Mac platforms. Happy debugging!
| saifsadiq1234 |
1,186,326 | Building my blog with AWS Amplify and Next.JS | Is it me or everyone has the experience of going back and forth between having a custom personal blog... | 19,716 | 2022-09-26T21:17:53 | https://www.farmin.dev/posts/beafb161-bc14-498f-90d3-1cc4366dfe52 | nextjs, amplify, webdev, tutorial | Is it me or everyone has the experience of going back and forth between having a custom personal blog or writing on a platform like `Dev.to`?!
I changed my mind multiple times in the last decade, I had my blogs done with WordPress, Jekyll, and Gatsby but then I moved to Dev.to and Medium and to be honest I might have put more effort into building blog sites rather than writing the blogs.
This time I want to have both, I'm gonna keep writing on Dev.to but also I want to re-build my new blog because I want to have a place to try new tech and enjoy building stuff. (instead of browsing on different social media at least I'd learn something eh?)
## AWS Amplify
I'm a huge AWS and Serverless fan and because of that, I decided to have my blog built and deployed by AWS . Plus I'm an [AWS Community Builder](https://aws.amazon.com/developer/community/community-builders/) so it makes sense to have something to explore and build related to that.
So how exactly? I could just write my backend with services such as API Gateway, Lambda, and DynamoDB and host my frontend on S3, but I decided to make it a little bit more powerful (even though it's a blog and I don't need to) by using [Amplify](https://aws.amazon.com/amplify/) as my full-stack solution to build, test and deploy the app.
AWS Amplify is an amazing tool that I can easily provision my backend with IoC (infrastructure as code) and have my pipelines ready to work and connected to GitHub.
Amplify offers this great tool called [Amplify Studio](https://aws.amazon.com/amplify/studio/) where you can graphically build your backend such as adding services or modeling your DB. I'm going to use it for:
- build my data model
- add file storage (S3 bucket)
- add authentication (If needed)
- add content (I can generate some mock data in case too)
and more importantly, I'm gonna use it as my "headless CMS" since I can easily write my blogs there instead of implementing an editor on my frontend or writing directly on the DB!!
There is also Amplify Service in AWS Console that would help to configure my pipelines and environments and connect them to my GitHub branches.
It is worth mentioning that Amplify has a [CLI](https://docs.amplify.aws/cli/) tool that offers everything that we can do (and more) on the Studio and Console in CLI and it has a lot of options for scaffolding the code.
In general, I find two ways to implement the backend with Amplify:
1. Configuring and changing things on the Studio or Web Console (Getting deployed with CloudFormation) and then `pull` them on your local to have them as code and commit them to the Git
2. Implementing everything locally using Amplify CLI or just manually and `push` them to AWS (AWS CloudFormation Deployment) then commit to the Git
## Next.js
On the Frontend side, I can easily scaffold a React app using AWS CLI but it uses [create-react-app](https://create-react-app.dev/) and will be deployed as SPA on S3.
Since I want to build a blog, **SEO** is a huge deal to me and I need **SSR**(server side) app to help me in that. because of this, my choice would be Next.js.(I'd go with SSG)
At this moment, Next.js is on version 12 but I will use version 11 since I know there are issues with Amplify and Next.js@12 (I will mention in another post)
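For reference, static generation in Next.js means exporting `getStaticProps` from a page. Here is a minimal sketch with placeholder data (the file name and hard-coded list are my own illustration, standing in for a real data fetch at build time):

```jsx
// pages/ssg-example.js — illustrative only; the hard-coded list stands in
// for a real data source queried at build time.
export async function getStaticProps() {
  const posts = [{ id: "1", title: "Hello SSG" }];
  // revalidate enables incremental static regeneration every 60 seconds
  return { props: { posts }, revalidate: 60 };
}

export default function SsgExample({ posts }) {
  return (
    <ul>
      {posts.map((p) => (
        <li key={p.id}>{p.title}</li>
      ))}
    </ul>
  );
}
```

The page is rendered to HTML at build time, which is exactly what helps with SEO here.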
## Enough talking, let's code!
### Backend
Let's head to Amplify Service in AWS Console [here](https://us-east-1.console.aws.amazon.com/amplify) and create a new app (build a new app). Choose a name and click next. Now Amplify will prepare everything and creates the Amplify studio and some resources needed for the project.
After finishing setting up, you should see this page and you can launch your Studio.

Now we can do some data modeling easily. I just need a table for my post so I'll create one with this schema and deploy it.

After successful deployment, I have my backend fully ready to use this schema. what it means is that I have my APIs ready to do CRUD operations on the DB using this schema.

If you're wondering what has happened in the background, I have a plan to write a blog about `What do we build on our AWS account with Amplify Data Model deployment?`
The last thing to do in the Studio is to make some mock data to use on the frontend. To be honest, this is an amazing feature that will save a lot of time in the development process. Browse to `content` menu on the right and click on `Actions` => `Auto-generate data` and generate 10 rows of data.
Awesome, now I have some mocked data and actually, I will use this dashboard later as my "headless CMS" feature to write my blogs.

It's time to move to frontend and build my app and configure it to use my APIs.
### Frontend
Yet, I don't have any code on our local and this is the time to build `Full-Stack` project locally.
Let's create a Next.JS app and then transform it into a full-stack app by pulling backend configuration.
Following [Next.JS docs](https://nextjs.org/docs/getting-started#automatic-setup), I'll create our app running:
```
> yarn create next-app
? What is your project named? › my-blog
```
after setup is finished, if you run `yarn dev`, you should be able to see the app running on `localhost:3000`.

then make sure I have [Amplify CLI](https://docs.amplify.aws/cli/start) installed on your computer.
```bash
npm install -g @aws-amplify/cli
```
### Downgrade Next.JS and Webpack
As of today, [Amplify doesn't support Next.JS@12 fully](https://github.com/aws-amplify/amplify-hosting/issues/2343) and because of that, it'd be better and safer to downgrade to version 11 (of course we'd miss some of the awesome features from v12 but that's fine). So I'm making it `"next": "11.1.4"` in the `package.json`. Since we're downgrading it to v11, we need to downgrade usage of `webpack` from 5 to 4 and for doing that we need to add `webpack5: false` to our `next.config.js` file. I also add `SVG` support to that file for the future and it looks like this:
```js
/** @type {import('next').NextConfig} */
const nextConfig = {
reactStrictMode: true,
swcMinify: true,
webpack5: false,
webpack: (config) => {
config.module.rules.push({
test: /\.svg$/,
use: ["@svgr/webpack"],
});
return config;
},
};
module.exports = nextConfig;
```
### Connecting Frontend to our backend
As you could see on your Amplify Studio, there's a command for pulling the backend code and everything you need to connect your webapp to backend services.
```bash
amplify pull --appId {you-app-id} --envName staging
```
if this is the first time, it will authenticate you on the web and goes back to the terminal. You'll be prompted for your IDE, language, framework and source folder.

after the setup finishes, I'd have a lot of file changes on the repo. Let's commit the code to Git to make sure we keep track of the work that I'm doing.
Alright, let's do some coding and get the mocked blog posts that we have and show it on the blog.
let's install `Amplify` library: `yarn add aws-amplify`
open `_app.js` file and add amplify setup:
```js
import Amplify from "aws-amplify";
import "../styles/globals.css";
import config from "../aws-exports";
Amplify.configure({
...config,
ssr: true,
});
function MyApp({ Component, pageProps }) {
return <Component {...pageProps} />;
}
export default MyApp;
```
Ok, now we can query some data in our `index.js` page to show it on the main page of our web app.
```js
import React from "react";
import Head from "next/head";
import Link from "next/link";
import styles from "../styles/Home.module.css";
import { DataStore } from "aws-amplify";
import { Post } from "../models";
export default function Home() {
const [posts, setPosts] = React.useState([]);
React.useEffect(() => {
async function fetchPosts() {
const postData = await DataStore.query(Post);
setPosts(postData);
}
const subscription = DataStore.observe(Post).subscribe(() => fetchPosts());
fetchPosts();
return () => subscription.unsubscribe();
}, []);
return (
<div className={styles.container}>
<Head>
<title>My Blog</title>
</Head>
<main className={styles.main}>
<h1>My Blog</h1>
{posts.map((post) => (
<Link key={post.id} href={`/posts/${post.id}`}>
<a>
<h2>{post.title}</h2>
</a>
</Link>
))}
</main>
</div>
);
}
```
Amazing, now I see all my post titles.

I have everything I need to write/move my blogs now, I just need to do some coding on the frontend side to show them and style them as I want.
Let's commit the code to GitHub so I can build my pipeline for the deployment.
## Continuous Deployment and Pipeline
In the next post, I'm gonna set up my [CD](https://en.wikipedia.org/wiki/Continuous_deployment).
| farminf |
1,186,695 | useContext in React | We use React Context to manage the state at a global level. Components can get data from the React... | 19,685 | 2022-09-07T05:02:00 | https://dev.to/savvyshivam/usecontext-in-react-5c9h | webdev, tutorial, beginners, react | We use React Context to **manage the state at a global level**. Components can get data from the React context **no matter how far down the components tree they are**.
The very first step in creating a Context is to import "**createContext**".

## Context
Context lets information reach the component that needs it **without being dragged through multiple middle components**, and it also helps when the same information is required by several components. It lets a **parent component** make certain information available to any component in the component tree below it, **without passing it down repeatedly as props.**
The disadvantage of only passing props shows up when you want two components' state to change together: you have to remove the state from both of them, move it to their closest common parent, and then pass it back down to them via props. But the closest common parent could sit far above the components that need the data, so **lifting the state up that high** forces the props through every layer in between, a situation known as **prop drilling.**

**createContext returns a context object. Components can read context by passing it to useContext().**
## useContext :
It is a React Hook that lets you read and subscribe to a context.
To import this hook, we add the following line at the top of the file:

Instead of using **useContext(ThemeContext)** again and again in each component, we define a "**Custom Hook**", i.e. a function called **useTheme** that does this for us.

We are now trying to create a simple program which will be able to **change the color of the background** and **the text** upon clicking a button.
The very first thing we will do is create a state variable and initialize it.
Here, we initialize the theme state variable to "**white**".

In this next piece of code, to **pass context** to the children, we **wrap them in the context Provider and pass it the values "theme" and "setTheme"**. Any component rendered inside the Provider can then **access those state variables**.

The final step would be to **export** the useTheme and ThemeProvider

Now, in the App.js file we import the useTheme to be able to use it further into the program

Here, we create two context variables, named **theme and setTheme**. Because the value is destructured positionally (as an array), the names of these variables are **independent** of the names of the variables used in the theme-context.js program.

The ***getStyle*** function contains a conditional statement that returns the appropriate styles based on the current value of the theme state variable.

Here, using the onClick event handler, we update the theme and output the desired result.
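For completeness, here is a hedged sketch of what the App.js walkthrough above adds up to; the exact class names and styles in the original sandbox may differ:

```jsx
// App.js: hedged sketch of the walkthrough above
import React from "react";
import { useTheme } from "./theme-context";

export default function App() {
  const [theme, setTheme] = useTheme();

  // Conditionally pick the styles that match the current theme
  const getStyle = () =>
    theme === "white"
      ? { background: "white", color: "black" }
      : { background: "black", color: "white" };

  return (
    <div style={getStyle()}>
      <button onClick={() => setTheme(theme === "white" ? "black" : "white")}>
        Toggle theme
      </button>
    </div>
  );
}
```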
## Output :


The link for the entire code used above is given below:
[https://codesandbox.io/s/use-context-i29vb5](https://codesandbox.io/s/use-context-i29vb5)
| savvyshivam |
1,186,716 | 606. Leetcode Solution in cpp | /** * Definition for a binary tree node. * struct TreeNode { * int val; * TreeNode... | 0 | 2022-09-07T05:50:35 | https://dev.to/chiki1601/606-leetcode-solution-in-cpp-3bgk | chiki1601, misspoojaanilkumarpatel, cpp | ```
/**
* Definition for a binary tree node.
* struct TreeNode {
* int val;
* TreeNode *left;
* TreeNode *right;
* TreeNode() : val(0), left(nullptr), right(nullptr) {}
* TreeNode(int x) : val(x), left(nullptr), right(nullptr) {}
* TreeNode(int x, TreeNode *left, TreeNode *right) : val(x), left(left), right(right) {}
* };
*/
class Solution {
public:
string tree2str(TreeNode* t) {
return dfs(t);
}
private:
string dfs(TreeNode* root) {
if (!root)
return "";
const string& rootStr = to_string(root->val);
if (root->right)
return rootStr + "(" + dfs(root->left) + ")(" + dfs(root->right) + ")";
if (root->left)
return rootStr + "(" + dfs(root->left) + ")";
return rootStr;
}
};
```
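If you want to poke at the recursion outside of LeetCode's C++ environment, here is a line-for-line JavaScript transcription of the same solution (my own sketch, using plain objects as tree nodes):

```javascript
// Same preorder recursion as the C++ solution above: always wrap the left
// subtree when a right subtree exists, so empty "()" pairs are preserved.
function tree2str(root) {
  if (!root) return "";
  const rootStr = String(root.val);
  if (root.right) return `${rootStr}(${tree2str(root.left)})(${tree2str(root.right)})`;
  if (root.left) return `${rootStr}(${tree2str(root.left)})`;
  return rootStr;
}
```

For the tree `[1,2,3,4]` this produces `"1(2(4))(3)"`, and for `[1,2,3,null,4]` it produces `"1(2()(4))(3)"`, matching the problem's examples.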
#leetcode
#challenge
Here is the link for the problem:
https://leetcode.com/problems/construct-string-from-binary-tree/ | chiki1601 |
1,186,836 | How to build an amazing wallet component using next.js wagmi rainbowkit tailwindcss | 1.overview I have been building some web3 projects. All of these projects would need a... | 0 | 2022-09-07T09:04:41 | https://dev.to/coffiasd/how-to-build-an-amazing-wallet-component-using-nextjs-wagmi-rainbowkit-tailwindcss-4hi9 | ## 1.overview
I have been building some web3 projects, and all of them need a wallet connect UI. That's why I'm writing this post. Step by step, I will show you how to build an amazing wallet connect UI using Next.js, Tailwind CSS, daisyUI, wagmi and, most importantly, RainbowKit. Here we go!



## 2.the tech stack we are going to use
- next.js
- wagmi(react hooks)
- rainbowkit (UI library)
- tailwindcss(css framework)
- daisyUI(UI library)
## 3.next.js setup
What's Next.js?
Next.js is a React framework for production, so if you have never used React before, you need to learn it first (I hope you have). Next.js has all the tools we need to make the web faster, with features like zero config, fast refresh, SSR, TypeScript support, etc.
### 3.1 next.js setup
```shell
npx create-next-app@latest --typescript
# or
yarn create next-app --typescript
```
### 3.2 add tailwindcss library
Tailwind CSS is a utility-first CSS framework that helps us write less custom CSS.
```shell
npm install -D tailwindcss@latest postcss@latest autoprefixer@latest
```
In the Next.js `pages/_app.js`, we need to import the base Tailwind CSS styles.
```js
// pages/_app.js
- import '../styles/globals.css'
+ import 'tailwindcss/tailwind.css'
function MyApp({ Component, pageProps }) {
return <Component {...pageProps} />
}
export default MyApp
```
Additionally, to make this project look nicer, I add the daisyUI library.
```shell
npm i --save daisyui
```
After all the above work is done, we just need to modify the Tailwind CSS configuration file in the root directory of this project.
```js
/** @type {import('tailwindcss').Config} */
module.exports = {
  content: ["./pages/**/*.{js,ts,jsx,tsx}", "./components/**/*.{js,ts,jsx,tsx}"],
theme: {
extend: {},
},
plugins: [require("daisyui")], //this is what we added.
};
```
## 4.rainbowkit configuration
We have to make some modifications to the \_app.js file. First and foremost, import the dependencies: the CSS, the provider and the chains.
Then we create a client object. Below is all the code.
```js
import {
WagmiConfig,
createClient,
configureChains,
defaultChains,
} from "wagmi";
import { publicProvider } from "wagmi/providers/public";
//rainbow kit UI framework.
import "@rainbow-me/rainbowkit/styles.css";
import { getDefaultWallets, RainbowKitProvider } from "@rainbow-me/rainbowkit";
const { chains, provider } = configureChains(defaultChains, [publicProvider()]);
const { connectors } = getDefaultWallets({
appName: "My RainbowKit App",
chains,
});
const client = createClient({
autoConnect: true,
connectors,
provider,
});
```
There are some optional parameters we can pass when creating the client:
- autoConnect
- connectors
- providers
- storage
- webSocketProvider
For more detailed documentation, refer to their official website: https://wagmi.sh/
After we get a client, we pass the client and chains into the MyApp function.
```
function MyApp({ Component, pageProps }) {
return (
<WagmiConfig client={client}>
<RainbowKitProvider chains={chains}>
<Component {...pageProps} />
</RainbowKitProvider>
</WagmiConfig>
)
}
```
That's all we need to configure before we can actually use it in our project.
Basically it's quite simple to create a connect-wallet button.
Let's say we create a button in our header component.
Here goes the code.
It's simple, right?
```
import {
useConnectModal,
useAccountModal,
useChainModal,
} from '@rainbow-me/rainbowkit';
import { useAccount } from 'wagmi'
const { isConnected } = useAccount(); // tells us whether a wallet is already connected
const { openConnectModal } = useConnectModal();
const { openAccountModal } = useAccountModal();
const { openChainModal } = useChainModal();
```
In React, after we define the click event handler, we attach it to the onClick property.
That's all we need to do.
If you run into any problem, feel free to contact me!
```html
{isConnected ?
(<><button className="btn btn-sm btn-outline btn-primary ml-3 normal-case" onClick={openAccountModal}>Profile</button><button className="btn btn-sm btn-outline btn-primary ml-3 normal-case" onClick={openChainModal}>Chain</button></>)
:
(<button className="btn btn-sm btn-outline btn-primary ml-3 normal-case" onClick={openConnectModal}>connect wallet</button>)
}
```
## 5.show you my code
github : https://github.com/coffiasd/demo-wagmi-rainbowkit
demo : https://demo-wagmi-rainbowkit.vercel.app/
| coffiasd | |
1,186,993 | Let this script type instead of you when you record a video of your browser ⌨ | I was working on cool gifs for Tolgee's new readme, where we show how simply you can modify a... | 0 | 2022-09-07T12:29:37 | https://dev.to/tolgee_i18n/let-this-script-type-instead-of-you-when-you-record-a-video-of-your-browser-25il | javascript, html, webdev, react | I was working on cool gifs for Tolgee's [new readme](https://github.com/tolgee/tolgee-platform/blob/main/README.md), where we show how simply you can modify a localization string in your App. But when I need to type something into input, I always make like 5 mistakes and my typing is not very nice to look at. So I created a script that types the string instead of me when I hit a specific shortcut. Now I use this to record videos, where I have to fill in some input, and I don't want to record it like 1000 times because I was typing inconsistently.
It can be useful for some pranks if you extend this. But let's leave this to your imagination.
## How does it work?
So how does it work? It takes the array of strings. When I hit `ctrl + h` for the first time, it iterates over characters in the first string and modifies the value of the currently focused input by adding the current character. It waits 30ms between every character addition. When I hit the shortcut a second time, it does the same thing with the second string.
## React issue
When you just modify the value property of an input element, React doesn't trigger your onChange listener and so the value is not changed, since React state is not updated. To trigger the onChange event listener you have to also dispatch `input` and do some other stuff you can read
about [here](https://stackoverflow.com/questions/30683628/react-js-setting-value-of-input/52486921#52486921).
## The script
The script (I call it typescript 😄):
```js
(() => {
const strs = ["This is text...", "This is another text."];
let hits = 0;
window.onkeyup = async (event) => {
if (event.ctrlKey && event.key === "h") {
if (hits >= strs.length) return; // stop once every string has been typed
const input = document.activeElement;
for (const char of strs[hits]) {
if (input.value !== undefined) {
setNativeValue(input, input.value + char)
}
await new Promise((resolve) => setTimeout(resolve, 30))
}
hits++;
}
}
// Boring part starts here
// It makes it working with react.js apps
function setNativeValue(element, value) {
let lastValue = element.value;
element.value = value;
let event = new Event("input", {target: element, bubbles: true});
// React 15
event.simulated = true;
// React 16
let tracker = element._valueTracker;
if (tracker) {
tracker.setValue(lastValue);
}
element.dispatchEvent(event);
}
})()
```
## How to use it
1. Copy the script and modify the strings to be typed
2. Copy the modified script
3. Go to the console of developer tools in your browser and paste the script
4. Hit enter
5. Focus an input and hit `ctrl + h`
6. Your text is being typed

---
Tolgee is an open-source solution for software localization i18n. Translate your applications and save up to 90% of time needed for localization! [Tolgee.io](https://tolgee.io)
Star our GitHub repository [tolgee/tolgee-platform](https://github.com/tolgee/tolgee-platform) ⭐⭐⭐!
 | jancizmar |
1,187,082 | How to manage dependencies between Gradle modules? | Using Gradle version catalog for easier dependencies management in a multimodule project. | 0 | 2022-09-07T15:31:45 | https://dev.to/aldok/how-to-manage-dependencies-between-gradle-modules-4jih | gradle, kotlin, android, java | ---
title: How to manage dependencies between Gradle modules?
published: true
description: Using Gradle version catalog for easier dependencies management in a multimodule project.
tags: #gradle #kotlin #android #java
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wauzftb0iwln4sqkld1b.jpg
---
In a multimodule project, managing dependencies manually can be challenging. For example, if you update a library version in one module but forget to update it in the others, your project will end up with conflicting versions of the same library.
Starting from [Gradle 7.4.1](https://docs.gradle.org/7.4.1/release-notes.html), version catalog is the recommended way of managing dependencies between Gradle projects (also known as a module).
To use the version catalog, simply add a `libs.versions.toml` file inside the gradle folder (`yourproject/gradle/libs.versions.toml`)
```
.
├── app
│ ├── build.gradle.kts
│ ├── proguard-rules.pro
│ └── src
├── gradle
│ ├── libs.versions.toml
│ └── wrapper
│ ├── gradle-wrapper.jar
│ └── gradle-wrapper.properties
├── build.gradle.kts
├── gradle.properties
├── gradlew
├── gradlew.bat
├── local.properties
└── settings.gradle.kts
```
Inside the `libs.versions.toml` file, you can add the dependencies of your projects.
```toml
[versions]
compose = "1.2.1"
[libraries]
compose-foundation = { module = "androidx.compose.foundation:foundation", version.ref = "compose" }
compose-material = { module = "androidx.compose.material:material", version.ref = "compose" }
compose-tooling = { module = "androidx.compose.ui:ui-tooling", version.ref = "compose" }
compose-ui = { module = "androidx.compose.ui:ui", version.ref = "compose" }
[bundles]
compose = ["compose-foundation", "compose-material", "compose-tooling", "compose-ui"]
```
That's it! The dependencies are available across your Gradle projects.
Here is how you use the dependencies in your project's `build.gradle.kts`.
```kotlin
dependencies {
implementation(libs.compose.foundation)
implementation(libs.compose.material)
implementation(libs.compose.tooling)
implementation(libs.compose.ui)
}
```
Note that Gradle converts the dash (`-`) separator to dot (`.`). From `compose-foundation` to `compose.foundation`.
Other benefits of using version catalogs are:
- Centralized version for some libraries.
By using `version.ref`, we can assign the same compose version for each library. So, you just need to update a library version in one place.
- Grouping dependencies.
Oftentimes, we have many dependencies that should be declared together. We can make the dependencies declaration shorter by using `bundle`, like this.
```kotlin
dependencies {
implementation(libs.bundles.compose)
}
```
## Summary
By using the version catalog, we can manage dependencies easier. This can be useful for a multimodule project, which is common in a mid-sized or large-sized software project.
Head over to [the official Gradle documentation](https://docs.gradle.org/current/userguide/platforms.html) for more detail.
Note:
- Gradle introduces version catalog since version [7.0](https://docs.gradle.org/7.0/release-notes.html), but it's still marked as an experimental feature.
- Photo taken by [redcharlie](https://unsplash.com/photos/Y--zr3CPaPs) | aldok |
1,187,594 | Recommendation system #4 - Algorithm and final results | In the previous post we looked closer at the algorithm and connection between Redis and Node. Today,... | 19,699 | 2022-09-08T04:28:40 | https://dev.to/meatboy/writing-recommendation-system-3-algorithm-and-final-results-5c70 | redishackathon, typescript, node, showdev | In the previous [post](https://dev.to/meatboy/writing-recommendation-system-2-redis-node-architecture-kje) we looked closer at the algorithm and connection between Redis and Node. Today, quickly I summarize how the recommendation is made.
A single recommendation is an overlap between item tags and actor tag-event interactions.
For example, the items:

- Titanic has tags: drama, tragedy, history, action.
- Merlin has tags: adventure, fantasy, history, wizards.
- Harry Potter has tags: adventure, fantasy, action, wizards.

The actor (the user) could previously have interacted with some films:

- Liked a film with tags: action, fantasy
- Added to favourites a film with tags: action, fantasy
- Liked a film with tags: history, action, comedy
- Liked a film with tags: adventure, fantasy, dragons
- Liked a film with tags: wizards, history, action
- Added to favourites a film with tags: drama, wizards, fantasy
Now let's say each like has a weight of 1 and each favourite has a weight of 3. The sum of the overlapping tag weights (with or without duplicates, with or without clamping each tag to 1), divided by the maximum score across films, gives the final score per film. The higher it is, the stronger the recommendation.

Example with duplication
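The scoring above can be sketched in a few lines of plain JavaScript. This is my own illustration of the idea (duplicates counted, no clamping), not the actual code from the repository:

```javascript
// My own illustration of the weighted tag-overlap described above:
// like = 1, favourite = 3, duplicate tag hits all count.
const INTERACTION_WEIGHTS = { like: 1, favourite: 3 };

// Build a tag -> accumulated weight map from the actor's past interactions.
function actorTagWeights(interactions) {
  const weights = {};
  for (const { type, tags } of interactions) {
    for (const tag of tags) {
      weights[tag] = (weights[tag] || 0) + INTERACTION_WEIGHTS[type];
    }
  }
  return weights;
}

// Score every item by summing the weights of its overlapping tags,
// then divide by the maximum so the strongest match scores 1.
function recommend(items, interactions) {
  const weights = actorTagWeights(interactions);
  const raw = items.map(({ title, tags }) => ({
    title,
    score: tags.reduce((sum, tag) => sum + (weights[tag] || 0), 0),
  }));
  const max = Math.max(1, ...raw.map((r) => r.score));
  return raw
    .map((r) => ({ title: r.title, score: r.score / max }))
    .sort((a, b) => b.score - a.score);
}
```

Running `recommend` on the three films and six interactions listed above scores them 19 (Harry Potter), 15 (Merlin) and 11 (Titanic) before normalisation, so Harry Potter comes out on top with a score of 1.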
To make sure tags stay relevant, they have an expiration time: as interests change, old tags expire and are replaced with new ones. It is also worth noting that the more keywords items have, the more precise the suggestions become.
### Demo of film recommendations with 10,000 videos
{% youtube https://www.youtube.com/watch?v=_m1BandnVsQ %}
And the source code:
{% github pilotpirxie/recommendation %}
That was the last post about my Redis Hackathon journey. It was fun to participate and learn more about the technical aspects of Redis. Also, the hackathon allowed me to stay focused and finish another side project! What a wonderful event! ;) | meatboy |
1,187,988 | Career Meetups That Will Blow Your Mind | In the times of such uncertainty and job hopping on the job market you have to be sure you make every... | 0 | 2022-09-08T13:16:15 | https://blog.meetupfeed.io/career-meetups-august-2022/ | career, programming, tutorial, beginners | In the times of such uncertainty and job hopping on the job market you have to be sure you make every effort to stand out of the crowd. That’s why we’ve brought you the finest [career tech talks](https://blog.meetupfeed.io/career-meetups-august-2022/) in August that will instantly level up your game. Enjoy!
[Growing Together: The Impact of Community on Career | Sumayyah Ahmed & Devanshi Modha & Dinorah Tovar](https://meetupfeed.io/talk/growing-together-the-impact-of-community-on-career)
Join us to hear from industry leaders on how contributing to and being visible in the developer community can have a meaningful impact on your career. Supporting your community can take many forms from acting as an organizer, conference speaker, developer mentor or more.
[5 Steps to High-Paying Job Offers You Deserve | Katie McIntyre](https://meetupfeed.io/talk/career-series-workshop-5-steps-to-high-paying-job-offers-you-deserve)
In this workshop you’ll learn the exact 5 step strategy that 575+ students have used to land dream jobs with companies they admire. So you no longer have to play the apply-and-wait game, waste countless hours applying to jobs, get ghosted by recruiters, or fail to land competitive job offers.
[Seven Habits of Highly Successful IT Leaders | Indira Munjuluri](https://meetupfeed.io/talk/leadership-series-seven-habits-of-highly-successful-it-leaders-connect-1)
What makes a successful IT leader? We bring out all seven habits that Indira thinks make a difference, and we also look at some more general questions about how to navigate a career, learning from her story and success.
[You Don’t Have to Be a Manager | Stacy Devino](https://meetupfeed.io/talk/you-dont-have-to-be-a-manager)
You are a fabulous Senior Engineer and you love what you do, but you feel stagnated. We have all heard it before: “Why don’t you become a manager?” Moving up the corporate ladder doesn’t mean that you have to move out of a technical role. Advancing past “senior engineer” doesn’t lead you to a crossroad — but to finding yourself in the middle of possibilities. There are many roles in tech that are still “technical” available, and this session will go over some of those options and what is involved in those roles. You will also hear the speaker’s first-hand experience of how you can make those roles your own and how they can differ significantly from one employer to the next.
[Document Yourself: A Framework for Career Advancement | Michelle Brenner](https://meetupfeed.io/talk/document-yourself-a-framework-for-career-advancement-michelle-brenner-r-edeploy-2019)
The goal of this workshop is to document yourself the way you would document code. You wouldn’t expect someone who wants to use the program you built to read every line of code. Instead, they’re relying on the design documents and doc strings to know how it works. The same is true with your career. This workshop is about making it easy for you to provide overwhelming evidence of your value to the company. When you can show your ROI, it’s much easier to secure that promotion, raise or new job that you deserve.
| meetupfeedio |
1,188,012 | Do you need global state? | It's no secret that global state is an absolute nightmare - and not just in React. In this post I'm... | 0 | 2022-09-08T14:15:11 | https://dev.to/mitchelmore/do-you-need-global-state-8i1 | react, javascript | It's no secret that global state is an absolute nightmare - and not just in React. In this post I'm going to show you how to use the Context API and possibly save you the headache of setting up Redux.
## The Context API
React's Context API is often overlooked by newer developers because it's not talked about as it probably should be. In short, it provides utilities for nested access to a value.
Have you ever had some state and passed it down through several layers of components? This is called **Prop Drilling** and it feels awkward for good reason - it's inefficient and really difficult to refactor! Context solves this problem.
### Setting up Context
Imagine you're building a dashboard page to display information about a user - you could have several components that all need to consume that data:
Create a file called `UserContext.ts`:
```ts
import { createContext } from 'react'
const UserContext = createContext<User | null>(null)
export default UserContext
```
### Providing Context to a component tree
Use the _Context Provider_ in the top level of the dashboard:
```tsx
import UserContext from './UserContext'
function Dashboard() {
const user: User = {
name: "Tom",
email: "hello@mitchelmore.dev"
}
return (
<UserContext.Provider value={user}>
...
</UserContext.Provider>
)
}
export default Dashboard
```
### Consuming Context
Accessing our context is as simple as passing it to one of React's built-in hooks:
```tsx
import { useContext } from 'react'
import UserContext from './UserContext'
function DisplayUser() {
const user = useContext(UserContext)
if (user === null) throw new Error("...")
return (<div>{user.name}</div>)
}
```
It really is that simple - and it's something I discovered far too late into my journey with React.
## When not to use Context
While Context is great for eliminating prop drilling - there are definitely scenarios where I'd be reaching for Redux:
### Complex data
If your data is a giant nested object, then a global state library is something I'd definitely recommend. Redux, for example, will help you split it into manageable "slices" and also let you define actions to get and mutate data at your will.
### Server-synchronised data
A much more sustainable approach to web app development is storing all data server-side and having the frontend reflect it. This completely eliminates global state on the frontend, and prevents issues with saving data as the backend is the single source of truth. [Tanstack Query](https://tanstack.com/query/) (formerly React Query) is by far the best and most popular solution - I'd highly recommend it!
## Conclusion
To conclude, the purpose of this article is to help you understand how understanding React and its ecosystem can help you make architectural decisions. Far too often people spend hours setting up a library like Redux (increasing their bundle size!) just for functionality that could've been perfectly achieved with context.
| mitchelmore |
1,188,071 | Hacking Javascript Objects - I | Most of us deal with Objects pretty much every day and it's one of the most commonly used data... | 0 | 2022-09-08T17:17:08 | https://aakansha.dev/hacking-javascript-objects-i | javascript, beginners, frontend, webdev | ---
title: Hacking Javascript Objects - I
published: true
date: 2022-07-18 17:07:04 UTC
tags: #Javascript, #Beginner, #frontend, #webdev
canonical_url: https://aakansha.dev/hacking-javascript-objects-i
---
Most of us deal with `Objects` pretty much every day and it's one of the most commonly used data structures. But few of us might know that it's also possible to control the behaviour of the properties in an object or you can say "Hack" the properties of Objects 😂.
In this post, we will be diving into some of the internals of the Objects that can help in achieving the same and we will also have a fun [Quiz](#quiz-time) towards the end of the post 😉.
Before we dive into the internals of `object`, let's start with some basics.
## What are objects?
`Objects` are a collection of properties where each property has a `key` and a `value`.
Objects can be created in three forms👇🏻
1. The `literal` notation with `curly braces {}`. This is the most commonly used syntax.
2. Using the `Object` constructor. `Object` class represents one of the data structures in `Javascript`. There would be hardly any need to use this form when creating objects but still good to know 🙂
3. We will not be going into details about the third form but will share it later in this post 🙂
## Customising the behaviour of properties in an object
So it's fairly easy to get started with objects, but here comes the interesting fact, it's also possible to customise the behaviour of the properties in an object.
Let's see what all customisations are possible 👀.
The `Object` class has a `static` method `defineProperty` which allows you to `add` a new property or `update` an existing property of an `object`, and also to control the behaviour of the property. `static` properties are accessed directly on the `class`; no instance needs to be created to access them.
## Object.defineProperty
#### Usage
```js
Object.defineProperty(obj, property, descriptor)
```
- `obj`: The `object` whose properties need to be updated.
- `property`: Name or [Symbol](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol) of the property which needs to be added/updated. Since [Symbol](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol) is not covered in this post so we will be using `Name`.
- `descriptor`: An `object` denoting the `attributes` for the property. This is going to help us in controlling the behaviour of a property in an object 😉.
Going further we will be referring to these attributes as `descriptors`.
There are two types of `descriptors`
1. [`Data Descriptors`](#data-descriptors) - The property descriptor has a `value` and may or may not be modifiable.
2. [`Accessor Descriptors`](#accessor-descriptors) - The property descriptor has `get` and `set` methods with which the value of the property can be retrieved and updated respectively.
### Data descriptors
| `Name` | `Type` | `Default` | `Description` |
| ---- | ---- | ---- | ---- |
| [`value`](#value) | Any valid [javascript type](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#data_and_structure_types) | `undefined` | This denotes the value of the `property`. Defaults to `undefined`. |
| [`writable`](#writable) | `boolean` | `false` | Implies whether the `value` of the `property` can be updated with an `assignment operator(=)`. |
#### value
This denotes the value of the property. Defaults to `undefined`.
```js
var myObj = Object.defineProperty({}, 'id', { value: 1 }); // value is passed
console.log(myObj); // { id: 1 }
myObj = Object.defineProperty({}, 'id', {}); // value not passed
console.log(myObj); // {id: undefined}
```
#### writable
Implies whether the `value` of the `property` can be updated with an `assignment operator(=)`. Defaults to `false` for properties defined via `Object.defineProperty`.
```js
// writable defaults to false since it's not passed
var myObj = Object.defineProperty({}, 'id', { value: 1 });
myObj.id = 10;
myObj.id; // 1, as the value couldn't be updated

// writable is true
myObj = Object.defineProperty({}, 'id', { value: 1, writable: true });
myObj.id = 10;
myObj.id; // 10
```
### Accessor descriptors
| `Name` | `Type` | `Default` | `Description` |
| ---- | ---- | ---- | ---- |
| [`get`](#get) | `function` | `undefined` | A `function` which returns the `value` of the property. This `function` gets called when the property is accessed. |
| [`set`](#set) | `function` | `undefined` | A `function` which sets the `value` of the `property`. This function gets called when the property value is set using `assignment operator(=)`. |
#### get
A `function` returns the `value` of the property. This `function` gets called when the property is accessed using `dot(.)` or `square brackets([])`.
```js
var myObj = Object.defineProperty({}, 'id', {get: () => 10 });
myObj.id // 10
```
#### set
A `function` that sets the `value` of the property. This function gets called when the property value is set using `assignment operator(=)`.
```js
var myObj = Object.defineProperty({}, "id", { set: (val) => (id = val) });
myObj.id = 20;
myObj.id; // undefined, as there is no getter
myObj = Object.defineProperty({}, "id", {
set: (val) => (id = val),
get: () => id,
});
myObj.id = 20;
myObj.id; // 20
```
*Note: An object can have either [`Data Descriptors`](#data-descriptors) or [`Accessor Descriptors`](#accessor-descriptors) but not both.*
```js
myObj = Object.defineProperty({}, 'id', { value: 10, set:() => id = 10 });
// Throws error as we are using both Data and Accessor descriptors
// value is a Data descriptor whereas set is an accessor descriptor
```
### Additional attributes for descriptors
Apart from `data descriptors` and `accessor descriptors`, there are additional attributes common to both the descriptors👇🏻 that can be used to control the behavior as well.
| Name | Type | Default | Description |
| ---- | ---- | ---- | ---- |
| [`configurable`](#configurable) | `boolean` | `false` | Implies whether the property can be deleted from the object. |
| [`enumerable`](#enumerable) | `boolean` | `false` | Implies whether the property will show during `enumeration` of the keys of the object. |
#### configurable
Implies whether the property can be deleted from the object. Defaults to `false`.
```js
var myObj = Object.defineProperty({}, 'id', { value: 10 });
delete myObj.id
myObj // { id: 10 } as it's not configurable
myObj = Object.defineProperty({}, 'id', { value: 10, configurable: true });
delete myObj.id
myObj // {} as it's configurable
```
#### enumerable
Implies whether the property will show during `enumeration` of the keys of the object eg when using [`Object.keys`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/keys) or [`for...in`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...in). Defaults to `false`.
```js
var myObj = Object.defineProperty({}, 'id', { value: 10 });
Object.keys(myObj) // [] as the key "id" is not enumerable
myObj = Object.defineProperty({}, 'id', { value: 10, enumerable: true });
Object.keys(myObj) // ["id"] as the key "id" is enumerable
```
For updating `multiple properties`, we can use `Object.defineProperties`.
```js
Object.defineProperties(obj, {
property1: descriptor,
property2: descriptor
});
```
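As a concrete (hypothetical) example of the signature above, here we define two enumerable, writable properties in one call:

```javascript
// Each property gets its own descriptor object
var point = Object.defineProperties({}, {
  x: { value: 1, writable: true, enumerable: true, configurable: true },
  y: { value: 2, writable: true, enumerable: true, configurable: true },
});

Object.keys(point); // ["x", "y"], since both properties are enumerable
```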
As mentioned earlier, there is a third way to create an object as well and that is [`Object.create`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/create) which helps to create an object with the specified prototype object and descriptors.
Now that you know about the `descriptors` with which the behavior of properties can be controlled, can you guess why 👇🏻 is possible?
```js
var myObj = { id : 10 };
myObj.id; // 10
myObj.id = 100;
myObj.id; // 100
Object.keys(myObj); // ["id"];
delete myObj.id
myObj; // {}
```
This means that when `objects` are created using `literal notation` or the `Object()` constructor, or when a new property is added via plain assignment (e.g. `obj.prop = value`), the descriptors are set to 👇🏻
| Name | Value |
| --- | --- |
| `value` | The value which is set when creating / updating the object. |
| `writable` | `true`, hence update is possible |
| `enumerable` | `true`, hence the keys are enumerable |
| `configurable` | `true`, hence `delete` works |
## How to retrieve descriptors of an existing object?
Using `descriptors` we have a lot more control, but there should be some way to retrieve the descriptors of properties in an existing object as well. Well yes, there is a static method `getOwnPropertyDescriptor` in the `Object` class which helps in achieving the same 😍.
## Object.getOwnPropertyDescriptor
#### Usage
```js
Object.getOwnPropertyDescriptor(obj, property)
```
- `obj`: The `object` whose property descriptors need to be retrieved.
- `property`: Name or [Symbol](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol) of the property whose descriptors needs to be retrieved. Since [Symbol](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol) is not covered in this post so we will be using `Name`.
```js
var myObj = { id: 10 };
Object.getOwnPropertyDescriptor(myObj, 'id')// {value: 10, writable: true, enumerable: true, configurable: true}
```
To retrieve `descriptors` of all `properties` of an object we can use `Object.getOwnPropertyDescriptors`
```js
Object.getOwnPropertyDescriptors(obj)
```
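One practical use, as a sketch: pairing `getOwnPropertyDescriptors` with `Object.create` clones an object while preserving its descriptors, which a plain `Object.assign` copy would lose.

```javascript
// Object.assign copies only enumerable properties and resets their
// descriptors; a descriptor-aware clone keeps everything intact.
var original = Object.defineProperty({ name: 'test' }, 'id', { value: 10 });

var assignCopy = Object.assign({}, original);
'id' in assignCopy; // false, the non-enumerable "id" was dropped

var exactClone = Object.create(
  Object.getPrototypeOf(original),
  Object.getOwnPropertyDescriptors(original)
);
exactClone.id; // 10
Object.getOwnPropertyDescriptor(exactClone, 'id').writable; // false, preserved
```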
## Quiz time
It's time to have some fun quiz now 😉. Take the quiz [here](https://forms.gle/nG7ZXfbGyvPthRNZ7) . Good luck!
## Closing Thoughts
If you have used some of the methods like [`Object.freeze`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/freeze) or [`Object.seal`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/seal), now you know what they do behind the scenes 🙂: they modify the `descriptors` for the properties of the object.
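For instance, you can verify for yourself that freezing an object flips `writable` and `configurable` to `false` on every property:

```javascript
var frozenObj = { id: 10 };
Object.freeze(frozenObj);

Object.getOwnPropertyDescriptor(frozenObj, 'id');
// { value: 10, writable: false, enumerable: true, configurable: false }

try {
  frozenObj.id = 100; // ignored in sloppy mode, throws in strict mode
} catch (e) {}
frozenObj.id; // 10 either way
```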
Most of the time these utilities do help but there might be cases where you want more control over the properties and that's where you will need to use it. Hope you enjoyed the post 😍. | aakansha1216 |
1,188,389 | Download Button: Interaction Animation Effects | Demo: If you don't feel like reading the text, see:... | 0 | 2022-09-08T22:42:41 | https://dev.to/mohammadsahragard/download-button-animation-3085 | javascript, css, tutorial, beginners | ### Demo:

###### If you don't feel like reading the text, see: https://youtu.be/GqeuTyft0kE
**Description:**
This is a cool interaction effect and animation for download buttons. It's useful and compact too: counting only non-empty lines, the whole thing comes to about 60 lines. I hope you like it ❤️✌🏻.
### Technologies Used:
- HTML
- CSS
- JavaScript
Now, let's go.
1. First, write the HTML markup:
```HTML
<div class="container">
<button class="button">
<div class="progress"></div>
<span class="value">Download</span>
</button>
</div>
```
The `.progress` div is the element that gets the linear-gradient background.
2. Ok, let's write the CSS (part 1):
```CSS
.container {
height: 100vh;
display: flex;
justify-content: center;
align-items: center;
background-color: #23232f;
}
.button {
all: unset;
height: 50px;
width: 120px;
text-align: center;
background-color: dodgerblue;
color: #fff;
border-radius: 5px;
outline: 2px solid royalblue;
outline-offset: 5px;
transition: 0.4s;
cursor: pointer;
position: relative;
overflow: hidden;
}
.button:active {
transform: scale(0.9);
}
```
Now, Part 2:
```CSS
.progress {
position: absolute;
inset: -20px 0 0 0;
background-image: linear-gradient(to top, royalblue, deeppink);
clip-path: polygon(0 0, 50% 20%, 100% 0, 100% 100%, 0 100%);
}
.value {
position: relative;
}
.button.start-download {
width: 50px;
border-radius: 50%;
}
```
The `.start-download` class is added to the button when it's clicked.
3. Well, now it's time for JavaScript:
```javascript
const $ = document;
const query = queryItem => $.querySelector(queryItem);

// ------- elements & state
const button = query('.button');
const progress = query('.progress');
const value = query('.value');
let percent = 0;

// ------- functions
const startDownload = () => {
  // Detach the listener immediately so a rapid second click
  // can't start a second interval.
  button.removeEventListener('click', startDownload);
  button.classList.add('start-download');

  const intervalItem = setInterval(() => {
    percent++;
    progress.style.inset = `${percent}% 0 0 0`;
    value.innerHTML = `${percent}%`;

    if (percent === 100) {
      clearInterval(intervalItem);
      percent = 0;
      button.classList.remove('start-download');
      progress.style.inset = '-20px 0 0 0';
      value.innerHTML = 'Download';
      button.addEventListener('click', startDownload);
    }
  }, 30);
};

// ------- events
button.addEventListener('click', startDownload);
```
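As a small refactor idea (just a sketch, the helper name is made up): the per-tick values can come from a pure function, which keeps the interval callback thin and makes the progress logic testable without a DOM.

```javascript
// Pure helper: given the current percent, return what the interval
// callback should apply on that tick.
const progressFrame = (percent) => ({
  inset: `${percent}% 0 0 0`, // slides the gradient overlay down
  label: `${percent}%`,       // text shown inside the button
  done: percent >= 100        // signals when to reset the button
});

progressFrame(40).inset; // "40% 0 0 0"
progressFrame(100).done; // true
```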
Done! That's it, I'd like to know your opinion 🙋🏻❤️🔥.
.
.
### More Tutorials:
- [Responsive Profile Page Using HTML and CSS](https://t.co/aMYyLNZsdZ)
.
### Follow Me In:
- [YouTube](https://www.youtube.com/channel/UCmlrWX41qiXEv6QNMzivxIw)
- [Instagram](https://instagram.com/MohammadSahragard)
- [GitHub](https://github.com/MohammadSahragrad)
- [Twitter](https://twitter.com/MammadSahragard)
- [LinkedIn](https://linkedin.com/in/MohammadSahragard) | mohammadsahragard |