id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,869,028 | React state is being shared across iframes? | We have a next.js app that we normally run by embedding it in an iframe on a third-party website. It... | 0 | 2024-05-29T13:30:44 | https://dev.to/ldorigo/state-is-being-shared-across-iframes-56a9 | webdev, nextjs, react, javascript | We have a next.js app that we normally run by embedding it in an iframe on a third-party website. It uses react and react-query for state management. For a specific usecase, we wanted to run two instances of our app on the same website, and noticed very weird behavior: the state of the two apps is getting jumbled together, and state changes in one iframe *sometimes* (not always) result in a state change in the other iframe. I wish I could give more details on when/how it happens, but I'm pretty confused by the issue and haven't been able to create a minimal reproducible example. Hopefully someone has an idea to help me troubleshoot? | ldorigo |
1,869,026 | Auth Service with JWT token and mail module, Part 2 | Love to work with you, You can hire me on Upwork. We have looked into email and auth service now is... | 27,543 | 2024-05-29T13:26:50 | https://dev.to/depak379mandal/auth-service-with-jwt-token-and-mail-module-part-2-1i0f | Love to work with you, You can hire me on [Upwork](https://www.upwork.com/freelancers/~011463d96b5b87d7ff).
We have looked into the email and auth services; now it is time to dive into the mail and user modules. Here is the structure of our mail module:
```bash
src/modules/mail
├── mail.interface.ts
├── mailerConfig.service.ts
└── templates
└── auth
└── registration.hbs
```
`mail.interface.ts` is a simple file that contains the interfaces the mail system needs. The mail module keeps our email templates in the `templates` folder, and `mailerConfig.service.ts` serves as the mail module's config factory. Let me paste them one by one.
```typescript
// src/modules/mail/mail.interface.ts
export interface MailData<T = never> {
to: string;
data: T;
}
```
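As a quick illustration of how the generic payload slots in, here is a standalone sketch — the interface is re-declared so the snippet is self-contained, and the values are made up:

```typescript
// Re-declaration of MailData<T> from mail.interface.ts so this
// sketch stands alone; the values below are illustrative only.
interface MailData<T = never> {
  to: string;
  data: T;
}

// Each email kind supplies its own payload type, e.g. the token hash
// that the registration email carries:
const verifyMail: MailData<{ hash: string }> = {
  to: 'user@example.com',
  data: { hash: 'abc123' },
};

console.log(verifyMail.data.hash); // abc123
```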
```html
// src/modules/mail/templates/auth/registration.hbs
<html lang='en'>
<head>
<meta charset='UTF-8' />
    <meta name='viewport' content='width=device-width, initial-scale=1.0' />
<title>{{title}}</title>
</head>
<body style='margin:0;font-family:arial'>
<table style='border:0;width:100%'>
<tr style='background:#eeeeee'>
<td
style='padding:20px;color:#808080;text-align:center;font-size:40px;font-weight:600'
>
{{app_name}}
</td>
</tr>
<tr>
<td style='padding:20px;color:#808080;font-size:16px;font-weight:100'>
Thank You for registration, Please verify to activate account.<br />
</td>
</tr>
<tr>
<td style='text-align:center'>
<a
href='{{url}}'
style='display:inline-block;padding:20px;background:#00838f;text-decoration:none;color:#ffffff'
>{{actionTitle}}</a>
</td>
</tr>
</table>
</body>
</html>
```
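To see roughly what the Handlebars adapter does with the `{{…}}` placeholders at send time, here is a tiny stand-in renderer — a deliberate simplification, since the real engine supports far more than flat substitution:

```typescript
// Minimal stand-in for Handlebars rendering: each {{name}} placeholder
// is replaced by the matching context value (missing keys become '').
function renderPlaceholders(
  template: string,
  context: Record<string, string>,
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => context[key] ?? '');
}

const snippet = "<a href='{{url}}'>{{actionTitle}}</a>";
const rendered = renderPlaceholders(snippet, {
  url: 'https://example.com/auth/verify?token=abc',
  actionTitle: 'Verify Your Account',
});
console.log(rendered);
// <a href='https://example.com/auth/verify?token=abc'>Verify Your Account</a>
```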
```typescript
// src/modules/mail/mailerConfig.service.ts
import { MailerOptions, MailerOptionsFactory } from '@nestjs-modules/mailer';
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { HandlebarsAdapter } from '@nestjs-modules/mailer/dist/adapters/handlebars.adapter';
import * as path from 'path';
@Injectable()
export class MailerConfigClass implements MailerOptionsFactory {
constructor(private configService: ConfigService) {}
createMailerOptions(): MailerOptions {
return {
transport: {
host: this.configService.get('mail.host'),
port: this.configService.get('mail.port'),
ignoreTLS: this.configService.get('mail.ignoreTLS'),
secure: this.configService.get('mail.secure'),
requireTLS: this.configService.get('mail.requireTLS'),
auth: {
user: this.configService.get('mail.user'),
pass: this.configService.get('mail.password'),
},
},
defaults: {
from: `"${this.configService.get(
'mail.defaultName',
)}" <${this.configService.get('mail.defaultEmail')}>`,
},
template: {
dir: path.join(
this.configService.get('app.workingDirectory'),
'src',
'modules',
'mail',
'templates',
),
adapter: new HandlebarsAdapter(),
options: {
strict: true,
},
},
};
}
}
```
Above, we are using Handlebars as our templating engine; it ships with the mailer module we installed. This completes the setup: because `MailerModule` is a global module that exports `MailerService`, the service is available everywhere, just like `ConfigService`. Now we can move to `AppModule`, where we import `AuthModule`, `MailerModule`, and `UserModule`. We will discuss `UserModule` in more detail below.
```typescript
// src/modules/app/app.module.ts
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { configLoads } from '../config';
import { TypeOrmModule } from '@nestjs/typeorm';
import { TypeORMConfigFactory } from '../database/typeorm.factory';
import { AuthModule } from '../auth/auth.module';
import { UserModule } from '../user/user.module';
import { MailerModule } from '@nestjs-modules/mailer';
import { MailerConfigClass } from '../mail/mailerConfig.service';
const modules = [AuthModule, UserModule];
export const global_modules = [
...
MailerModule.forRootAsync({
useClass: MailerConfigClass,
}),
];
@Module({
imports: [...global_modules, ...modules],
})
export class AppModule {}
```
In the `src/modules/config/app.config.ts` file, we have added a new variable:
```typescript
frontendDomain: process.env.FRONTEND_DOMAIN,
```
Likewise, in `src/modules/config/mail.config.ts` we have added new config variables:
```typescript
// src/modules/config/mail.config.ts
import { registerAs } from '@nestjs/config';
export default registerAs('mail', () => ({
port: parseInt(process.env.MAIL_PORT, 10),
host: process.env.MAIL_HOST,
user: process.env.MAIL_USER,
password: process.env.MAIL_PASSWORD,
defaultEmail: process.env.MAIL_DEFAULT_EMAIL,
defaultName: process.env.MAIL_DEFAULT_NAME,
ignoreTLS: process.env.MAIL_IGNORE_TLS === 'true',
secure: process.env.MAIL_SECURE === 'true',
requireTLS: process.env.MAIL_REQUIRE_TLS === 'true',
}));
```
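One detail worth calling out: `process.env` values are always strings, so a bare truthiness check would treat the string `'false'` as true. Comparing against the literal `'true'`, as the config above does, avoids that pitfall. A standalone sketch:

```typescript
// Env vars are strings (or undefined), so compare against the literal
// 'true' rather than relying on truthiness.
function envFlag(value: string | undefined): boolean {
  return value === 'true';
}

console.log(envFlag('true'));    // true
console.log(envFlag('false'));   // false (truthiness would say true!)
console.log(envFlag(undefined)); // false
```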
We are now ready to move into `UserModule`; we have seen all the essential parts we need. This is the last piece required before all those APIs start working.
```typescript
// src/modules/user/services/token.service.ts
import { Injectable } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';
import { User } from 'src/entities/user.entity';
import { Token, TokenType } from 'src/entities/user_token.entity';
import { Repository } from 'typeorm';
@Injectable()
export class TokenService {
constructor(
@InjectRepository(Token)
private readonly tokenRepository: Repository<Token>,
) {}
async create(
user: User,
type: keyof typeof TokenType = 'REGISTER_VERIFY',
expires_at: Date = new Date(Date.now() + 1000 * 60 * 60 * 24 * 7),
) {
const token = Token.create({
user_id: user.id,
type: TokenType[type],
expires_at,
});
return this.tokenRepository.save(token);
}
async verify(token: string, type: keyof typeof TokenType) {
const tokenEntity = await this.tokenRepository.findOne({
relations: ['user'],
loadEagerRelations: true,
where: { token, type: TokenType[type], is_used: false },
});
if (!tokenEntity) {
throw new Error('Token not found');
}
if (tokenEntity.expires_at < new Date()) {
throw new Error('Token expired');
}
tokenEntity.is_used = true;
await tokenEntity.save();
return tokenEntity.user;
}
}
```
The code is self-explanatory: one function creates a token and another verifies it. The token expires in a week by default, but `expires_at` is flexible and can be overridden via the third argument. Now, on to `UserService`, which does nothing but create a user.
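The expiry and reuse checks from `TokenService.verify` can be distilled into a pure function. Here is a sketch with a plain object standing in for the TypeORM entity (the field names mirror the entity):

```typescript
// Plain-object stand-in for the Token entity; only the fields the
// check needs are modelled here.
interface TokenLike {
  expires_at: Date;
  is_used: boolean;
}

// A token is usable if it has not been consumed and has not expired.
function isTokenUsable(token: TokenLike, now: Date = new Date()): boolean {
  return !token.is_used && token.expires_at.getTime() >= now.getTime();
}

const oneWeek = 1000 * 60 * 60 * 24 * 7; // same default as TokenService.create
const fresh: TokenLike = {
  expires_at: new Date(Date.now() + oneWeek),
  is_used: false,
};
const expired: TokenLike = {
  expires_at: new Date(Date.now() - 1000),
  is_used: false,
};

console.log(isTokenUsable(fresh));   // true
console.log(isTokenUsable(expired)); // false
```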
```typescript
// src/modules/user/services/user.service.ts
import { Injectable } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';
import { User } from 'src/entities/user.entity';
import { RegisterDto } from 'src/modules/auth/email.dto';
@Injectable()
export class UserService {
constructor(
@InjectRepository(User)
private userRepository: Repository<User>,
) {}
async create(userCreateDto: RegisterDto) {
const user = User.create({ ...userCreateDto });
return this.userRepository.save(user);
}
}
```
Finally, we register them as providers and export them so they can be used in other modules.
```typescript
// src/modules/user/user.module.ts
import { Module } from '@nestjs/common';
import { UserService } from './services/user.service';
import { TokenService } from './services/token.service';
import { Token } from 'src/entities/user_token.entity';
import { User } from 'src/entities/user.entity';
import { TypeOrmModule } from '@nestjs/typeorm';
@Module({
imports: [TypeOrmModule.forFeature([User, Token])],
providers: [UserService, TokenService],
exports: [UserService, TokenService],
})
export class UserModule {}
```
Now we can open a terminal, run `npm run start:dev`, and see our output. But before anything else, we need a logger that records our requests and responses. For that we use morgan: install it with `npm i morgan` and its types with `npm i -D @types/morgan`. It can then be included in `createApplication` in the bootstrap file.
```typescript
// src/utils/bootstrap.ts
import * as morgan from 'morgan';
export const createApplication = (app: INestApplication) => {
...
app.use(morgan('dev'));
return app;
};
```
Now it will give us more insight into individual API calls. After running `npm run start:dev`, we get output like the following, showing all the modules loading with their mapped routes.
```bash
[3:14:58 AM] File change detected. Starting incremental compilation...
[3:14:58 AM] Found 0 errors. Watching for file changes.
Debugger attached.
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [NestFactory] Starting Nest application...
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [InstanceLoader] AppModule dependencies initialized +27ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [InstanceLoader] TypeOrmModule dependencies initialized +0ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [InstanceLoader] MailerModule dependencies initialized +1ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [InstanceLoader] ConfigHostModule dependencies initialized +0ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [InstanceLoader] ConfigModule dependencies initialized +11ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [InstanceLoader] JwtModule dependencies initialized +5ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [InstanceLoader] MailerCoreModule dependencies initialized +0ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [InstanceLoader] TypeOrmCoreModule dependencies initialized +71ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [InstanceLoader] TypeOrmModule dependencies initialized +0ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [InstanceLoader] TypeOrmModule dependencies initialized +0ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [InstanceLoader] UserModule dependencies initialized +0ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [InstanceLoader] AuthModule dependencies initialized +1ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [RoutesResolver] EmailController {/auth/email} (version: 1): +14ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [RouterExplorer] Mapped {/auth/email/register, POST} (version: 1) route +2ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [RouterExplorer] Mapped {/auth/email/verify, POST} (version: 1) route +0ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [RouterExplorer] Mapped {/auth/email/login, POST} (version: 1) route +0ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [RouterExplorer] Mapped {/auth/email/send-verify-email, POST} (version: 1) route +1ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [RouterExplorer] Mapped {/auth/email/reset-password-request, POST} (version: 1) route +0ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [RouterExplorer] Mapped {/auth/email/reset-password, POST} (version: 1) route +0ms
[Nest] 78996 - 04/09/2024, 3:15:01 AM LOG [NestApplication] Nest application successfully started +3ms
```
With that, we can test the routes. I am using mailtrap.io as my mail testing platform, but you can go with anything you like. Update `.env` using the sample provided below:
```ini
NODE_ENV=development
APP_PORT=8000
APP_NAME="NestJS Series"
FRONTEND_DOMAIN=https://example.com
# Database Configuration
DATABASE_URL=postgresql://danimai:Danimai@localhost:5432/nest-series
# Mail configuration
MAIL_HOST=
MAIL_PORT=2525
MAIL_USER=
MAIL_PASSWORD=
MAIL_IGNORE_TLS=true
MAIL_SECURE=false
MAIL_REQUIRE_TLS=false
MAIL_DEFAULT_EMAIL=noreply@example.com
MAIL_DEFAULT_NAME=Danimai
# JWT
AUTH_JWT_SECRET=random-gibberish-token-for-jwt
AUTH_JWT_TOKEN_EXPIRES_IN=90d
```
I have completed my testing, and everything works fine on my side. If you run into any issues, let me know; we also have a comment section to discuss things in more depth. In the next tutorial, we will learn how to update the user profile and build an authorisation guard.
Thank you very much for reading. See you in the next article. | depak379mandal | |
1,868,993 | Auth Service with JWT token and mail module, Part 1 | Love to work with you, You can hire me on Upwork. Now we have to go with big changes and setup, We... | 27,543 | 2024-05-29T13:18:47 | https://dev.to/depak379mandal/auth-service-with-jwt-token-and-mail-module-part-1-eh4 | Love to work with you, You can hire me on [Upwork](https://www.upwork.com/freelancers/~011463d96b5b87d7ff).
Now it's time for some big changes and setup: we need a foundation for basic libraries like mail, JWT, and bcrypt. Let me describe the whole scenario with a flowchart and some textual context. We will add more complex ideas in the future, such as refresh and access tokens and media upload.

Above is a sequence diagram of the entire architecture we are going to build for the authentication flow. We will also include a login API, which is not yet shown in the diagram. Let me start with what we call a service: I am attaching an email service that will help you understand how the work is distributed across multiple modules — mail, auth, and user. Each module fulfils its own functionality, and we arrange the required functions accordingly. DTOs and controllers were discussed in the previous article.
Below I include the `EmailService`, but there is also an `AuthService` that holds the core parts of authentication, such as creating the JWT token and sending the required emails. We will discuss `AuthService` after walking through the `EmailService`.
```typescript
// src/modules/auth/services/email.service.ts
import {
Injectable,
NotFoundException,
UnprocessableEntityException,
} from '@nestjs/common';
import {
EmailVerifyDto,
LoginDto,
RegisterDto,
ResetPasswordDto,
SendVerifyMailDto,
} from '../email.dto';
import { UserService } from '../../user/services/user.service';
import { TokenService } from '../../user/services/token.service';
import { InjectRepository } from '@nestjs/typeorm';
import { User } from 'src/entities/user.entity';
import { Repository } from 'typeorm';
import { AuthService } from './auth.service';
@Injectable()
export class EmailService {
constructor(
private authService: AuthService,
private userService: UserService,
private tokenService: TokenService,
@InjectRepository(User) private userRepository: Repository<User>,
) {}
async register(registerDto: RegisterDto) {
const user = await this.userService.create(registerDto);
const token = await this.tokenService.create(user, 'REGISTER_VERIFY');
await this.authService.userRegisterEmail({
to: user.email,
data: {
hash: token.token,
},
});
}
async verify(verifyDto: EmailVerifyDto) {
try {
const user = await this.tokenService.verify(
verifyDto.verify_token,
'REGISTER_VERIFY',
);
user.email_verified_at = new Date();
user.is_active = true;
await user.save();
} catch (e) {
throw new UnprocessableEntityException({ verify_token: e.message });
}
}
async login(loginDto: LoginDto) {
const user = await this.userRepository.findOne({
where: { email: loginDto.email.toLowerCase() },
});
if (!user) {
throw new UnprocessableEntityException({ email: 'User not found' });
}
if (!user.is_active) {
throw new UnprocessableEntityException({ email: 'User not active' });
}
if (!user.email_verified_at) {
throw new UnprocessableEntityException({ email: 'User not verified' });
}
if (!user.comparePassword(loginDto.password)) {
throw new UnprocessableEntityException({
password: 'Password is incorrect',
});
}
const auth_token = this.authService.createJwtToken(user);
return { auth_token };
}
async sendVerifyMail(sendVerifyMailDto: SendVerifyMailDto) {
const user = await this.userRepository.findOne({
where: { email: sendVerifyMailDto.email.toLowerCase() },
});
if (!user) {
throw new NotFoundException({ email: 'User not found' });
}
if (user.email_verified_at) {
throw new UnprocessableEntityException({
email: 'User already verified',
});
}
const token = await this.tokenService.create(user, 'REGISTER_VERIFY');
await this.authService.userRegisterEmail({
to: user.email,
data: {
hash: token.token,
},
});
}
async sendForgotMail(sendForgotMailDto: SendVerifyMailDto) {
const user = await this.userRepository.findOne({
where: { email: sendForgotMailDto.email.toLowerCase() },
});
if (!user) {
throw new UnprocessableEntityException({ email: 'User not found' });
}
if (!user.email_verified_at) {
throw new UnprocessableEntityException({
email: 'Please verify email first.',
});
}
const token = await this.tokenService.create(user, 'RESET_PASSWORD');
await this.authService.forgotPasswordEmail({
to: user.email,
data: {
hash: token.token,
},
});
}
async resetPassword(resetPasswordDto: ResetPasswordDto) {
try {
const user = await this.tokenService.verify(
resetPasswordDto.reset_token,
'RESET_PASSWORD',
);
user.password = resetPasswordDto.password;
await user.save();
} catch (e) {
throw new UnprocessableEntityException({ reset_token: e.message });
}
}
}
```
That is a lot to take in at once, so I will break the functions down one by one, and we will see how the pieces fit together. It is going to be a long ride, and that is fine: every system is built from small components. We just need to understand the structure and organization of the many small components that make up the whole system.
## Register User
```typescript
// src/modules/auth/services/email.service.ts
async register(registerDto: RegisterDto) {
const user = await this.userService.create(registerDto);
const token = await this.tokenService.create(user, 'REGISTER_VERIFY');
await this.authService.userRegisterEmail({
to: user.email,
data: {
hash: token.token,
},
});
}
```
In the above, anything related to the user — creating a user, updating a user, and so on — is done in `UserService`. `UserService` belongs to `UserModule`, so we use it from there; we don't want to scatter responsibilities that belong to a particular module. NestJS lets a module export its providers: by importing that module, any other module can inject the exported dependency without listing it in its own providers. So in the code above, `UserService` and `TokenService` come from `UserModule`. The `register` function creates the user and a token, then sends an email containing the token for the verification step of authentication.
## Verify User
```typescript
// src/modules/auth/services/email.service.ts
async verify(verifyDto: EmailVerifyDto) {
try {
const user = await this.tokenService.verify(
verifyDto.verify_token,
'REGISTER_VERIFY',
);
user.email_verified_at = new Date();
user.is_active = true;
await user.save();
} catch (e) {
throw new UnprocessableEntityException({ verify_token: e.message });
}
}
```
Above is the basic idea of what happens on verification: it tries to verify the token via the helper `this.tokenService.verify` (which we will look at later), and if the token is valid, it sets the `email_verified_at` field to the current date and sets `is_active` to `true`. The user can then log in to their account and use the APIs.
## Resend Verify Mail (If user is not verified)
```typescript
// src/modules/auth/services/email.service.ts
async sendVerifyMail(sendVerifyMailDto: SendVerifyMailDto) {
const user = await this.userRepository.findOne({
where: { email: sendVerifyMailDto.email.toLowerCase() },
});
if (!user) {
throw new NotFoundException({ email: 'User not found' });
}
if (user.email_verified_at) {
throw new UnprocessableEntityException({
email: 'User already verified',
});
}
const token = await this.tokenService.create(user, 'REGISTER_VERIFY');
await this.authService.userRegisterEmail({
to: user.email,
data: {
hash: token.token,
},
});
}
```
If a user has not verified their account before the token's expiry time, they can request another verification email from this endpoint. We inject `userRepository` to check whether the user exists in our DB and whether they are already verified. If they are already verified, we do not re-send the verification email; if not, we send another email at the user's request.
## Send Forgot Email
```typescript
// src/modules/auth/services/email.service.ts
async sendForgotMail(sendForgotMailDto: SendVerifyMailDto) {
const user = await this.userRepository.findOne({
where: { email: sendForgotMailDto.email.toLowerCase() },
});
if (!user) {
throw new UnprocessableEntityException({ email: 'User not found' });
}
if (!user.email_verified_at) {
throw new UnprocessableEntityException({
email: 'Please verify email first.',
});
}
const token = await this.tokenService.create(user, 'RESET_PASSWORD');
await this.authService.forgotPasswordEmail({
to: user.email,
data: {
hash: token.token,
},
});
}
```
This service function sends the forgot-password email, so whenever a user requests a password reset, we use it. As above, we fetch the user's details and proceed only if they exist. If they are not verified, we don't proceed, since they need to verify their email first. (That verification check is optional; it is really up to you whether to include it.) Next we create a token so the request can be tracked, and then we send the reset-password email to the user.
## Reset Password
```typescript
// src/modules/auth/services/email.service.ts
async resetPassword(resetPasswordDto: ResetPasswordDto) {
try {
const user = await this.tokenService.verify(
resetPasswordDto.reset_token,
'RESET_PASSWORD',
);
user.password = resetPasswordDto.password;
await user.save();
} catch (e) {
throw new UnprocessableEntityException({ reset_token: e.message });
}
}
```
Above are the steps to reset the password: we use `tokenService`, which verifies the token and returns the associated user. After that, we save the new password to the DB; you will notice we seem to write it directly without encrypting. That part is actually handled by a hook we implement in the `User` entity.
```typescript
// src/entities/user.entity.ts
import {
AfterLoad,
BeforeInsert,
BeforeUpdate,
Column,
Entity,
OneToMany,
} from 'typeorm';
import { ApiHideProperty, ApiProperty } from '@nestjs/swagger';
import { BaseEntity } from './base';
import { Token } from './user_token.entity';
import * as bcrypt from 'bcryptjs';
@Entity({ name: 'users' })
export class User extends BaseEntity {
...
@ApiHideProperty()
previousPassword: string;
@AfterLoad()
storePasswordInCache() {
this.previousPassword = this.password;
}
@BeforeInsert()
@BeforeUpdate()
async setPassword() {
if (this.previousPassword !== this.password && this.password) {
const salt = await bcrypt.genSalt();
this.password = await bcrypt.hash(this.password, salt);
}
this.email = this.email.toLowerCase();
}
comparePassword(password: string) {
return bcrypt.compareSync(password, this.password);
}
}
```
Above, we are using TypeORM entity listeners (hooks). We always cache the loaded password as `previousPassword`, so whenever the password changes we can compare against it and hash the new value with bcryptjs only when needed. The hook names are self-explanatory: `@AfterLoad` runs after the data is fetched from the DB, while `@BeforeInsert` and `@BeforeUpdate` run just before the data is inserted or updated, respectively. We also need the bcryptjs library installed:
```bash
npm i bcryptjs
npm i -D @types/bcryptjs
```
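To make the hook interplay concrete, here is a standalone sketch of the "only re-hash when the password actually changed" logic. A stub hash stands in for bcrypt, and the class is a deliberate simplification of the entity:

```typescript
// Stub hash so the sketch needs no bcrypt dependency.
const stubHash = (plain: string) => `hashed(${plain})`;

class UserSketch {
  password = '';
  previousPassword = '';

  // Mirrors @AfterLoad: remember the stored (already hashed) password.
  storePasswordInCache() {
    this.previousPassword = this.password;
  }

  // Mirrors @BeforeInsert/@BeforeUpdate: hash only when changed.
  setPassword() {
    if (this.previousPassword !== this.password && this.password) {
      this.password = stubHash(this.password);
    }
  }
}

const user = new UserSketch();
user.password = 'secret';
user.setPassword();               // first save: gets hashed
const stored = user.password;

user.storePasswordInCache();      // simulate reloading the row from the DB
user.setPassword();               // update without touching the password
console.log(user.password === stored); // true — no double hashing
```

Without the `previousPassword` cache, every entity update would re-hash the already-hashed value and effectively lock the user out.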
## Login User
```typescript
// src/modules/auth/services/email.service.ts
async login(loginDto: LoginDto) {
const user = await this.userRepository.findOne({
where: { email: loginDto.email.toLowerCase() },
});
if (!user) {
throw new UnprocessableEntityException({ email: 'User not found' });
}
if (!user.is_active) {
throw new UnprocessableEntityException({ email: 'User not active' });
}
if (!user.email_verified_at) {
throw new UnprocessableEntityException({ email: 'User not verified' });
}
if (!user.comparePassword(loginDto.password)) {
throw new UnprocessableEntityException({
password: 'Password is incorrect',
});
}
const auth_token = this.authService.createJwtToken(user);
return { auth_token };
}
```
At last, we can discuss how a user actually logs in. The login DTO gives us an email and password; using both, we check whether the user exists, then call `comparePassword`. If the password is correct, we create a token via `authService`.
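The pre-checks in `login` run in a fixed order, each short-circuiting with its own error. A pure-function sketch (field names mirror the entity; the return value is the error message, or `null` to proceed):

```typescript
// Only the fields the checks inspect are modelled here.
interface UserCheck {
  is_active: boolean;
  email_verified_at: Date | null;
}

// Returns the first failing check's message, or null if the flow may
// proceed to password comparison.
function loginError(user: UserCheck | null): string | null {
  if (!user) return 'User not found';
  if (!user.is_active) return 'User not active';
  if (!user.email_verified_at) return 'User not verified';
  return null;
}

console.log(loginError(null)); // User not found
console.log(loginError({ is_active: true, email_verified_at: new Date() })); // null
```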
Now it is time to reveal the core services: `AuthService` and `MailerService`.
## Auth Service
We need libraries like `@nestjs/jwt` and `@nestjs-modules/mailer` (which wraps nodemailer) to handle tokenization and mailing, respectively. To install them, use the command below:
```bash
npm i @nestjs/jwt @nestjs-modules/mailer
```
Now we can define the `AuthService`, which holds our JWT token creation and sends mail using nodemailer. The functions we have implemented are quite simple, so take a look. It relies on several pieces being in place, such as `MailModule` being a global module, and on the `MailData` interface, which defines the common data passed to the email functions.
```typescript
// src/modules/auth/services/auth.service.ts
import { Injectable } from '@nestjs/common';
import { User } from 'src/entities/user.entity';
import { JwtService } from '@nestjs/jwt';
import { MailerService } from '@nestjs-modules/mailer';
import { ConfigService } from '@nestjs/config';
import { MailData } from 'src/modules/mail/mail.interface';
@Injectable()
export class AuthService {
constructor(
private jwtService: JwtService,
private mailerService: MailerService,
private configService: ConfigService,
) {}
createJwtToken(user: User) {
return this.jwtService.sign({
id: user.id,
timestamp: Date.now(),
});
}
async userRegisterEmail(
mailData: MailData<{
hash: string;
}>,
) {
await this.mailerService.sendMail({
to: mailData.to,
subject: 'Thank You For Registration, Verify Your Account.',
text: `${this.configService.get(
'app.frontendDomain',
)}/auth/verify?token=${mailData.data.hash}`,
template: 'auth/registration',
context: {
url: `${this.configService.get(
'app.frontendDomain',
)}/auth/verify?token=${mailData.data.hash}`,
app_name: this.configService.get('app.name'),
title: 'Thank You For Registration, Verify Your Account.',
actionTitle: 'Verify Your Account',
},
});
}
async forgotPasswordEmail(
mailData: MailData<{
hash: string;
}>,
) {
await this.mailerService.sendMail({
to: mailData.to,
subject: 'Here is your Link for Reset Password.',
text: `${this.configService.get(
'app.frontendDomain',
)}/auth/reset-password?token=${mailData.data.hash}`,
template: 'auth/registration',
context: {
url: `${this.configService.get(
'app.frontendDomain',
)}/auth/reset-password?token=${mailData.data.hash}`,
app_name: this.configService.get('app.name'),
title: 'Here is your Link for Reset Password.',
actionTitle: 'Reset Password',
},
});
}
}
```
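Both emails build their links the same way: frontend domain plus a path plus the token as a query parameter. A sketch of that assembly (the domain comes from `FRONTEND_DOMAIN` via `app.frontendDomain`):

```typescript
// Assemble the link the email points at; the path differs per email kind.
function buildActionUrl(
  frontendDomain: string,
  path: string,
  hash: string,
): string {
  return `${frontendDomain}${path}?token=${hash}`;
}

console.log(buildActionUrl('https://example.com', '/auth/verify', 'abc123'));
// https://example.com/auth/verify?token=abc123
console.log(buildActionUrl('https://example.com', '/auth/reset-password', 'abc123'));
// https://example.com/auth/reset-password?token=abc123
```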
For JWT, we only need to register it as a module in `AuthModule`. To use any entity in a service, it must be injected from a module; otherwise, Nest will complain that no matching provider or injectable is available. That is why, to use `userRepository`, we import `TypeOrmModule.forFeature([User])`, which makes the required entity injectable as a repository.
```typescript
// src/modules/auth/auth.module.ts
import { Module } from '@nestjs/common';
import { EmailController } from './email.controller';
import { EmailService } from './services/email.service';
import { AuthService } from './services/auth.service';
import { UserModule } from '../user/user.module';
import { TypeOrmModule } from '@nestjs/typeorm';
import { User } from 'src/entities/user.entity';
import { JwtModule } from '@nestjs/jwt';
import { ConfigService } from '@nestjs/config';
@Module({
imports: [
UserModule,
TypeOrmModule.forFeature([User]),
JwtModule.registerAsync({
inject: [ConfigService],
useFactory: async (configService: ConfigService) => ({
secret: configService.get('auth.secret'),
signOptions: { expiresIn: configService.get('auth.expires') },
}),
}),
],
controllers: [EmailController],
providers: [EmailService, AuthService],
})
export class AuthModule {}
```
To define basic options for JWT, we provide them directly when importing the module, as follows. We use `registerAsync` because the module depends on another module (`ConfigModule`); it is instantiated dynamically after its dependencies are created. Here we provide basic options such as the secret and the expiration time.
```typescript
// src/modules/auth/auth.module.ts
JwtModule.registerAsync({
inject: [ConfigService],
useFactory: async (configService: ConfigService) => ({
secret: configService.get('auth.secret'),
signOptions: { expiresIn: configService.get('auth.expires') },
}),
}),
```
We still need to discuss the mail module and user module used in the code snippets above, but this article is getting bigger than expected, so let us move those parts to the next article. Thank you very much for reading; check my profile for the other articles in this series.
See you in the next one. | depak379mandal | |
1,869,025 | A Single Killer Feature You Need For Collaborative Database Development: Source Control For MySQL | Database development is a complex process that requires quality planning, technical skills,... | 0 | 2024-05-29T13:23:19 | https://dev.to/dbajamey/a-single-killer-feature-you-need-for-collaborative-database-development-source-control-for-mysql-41n3 | mysql, mariadb, database | Database development is a complex process that requires quality planning, technical skills, experience, and collaboration. Regardless of the project scope and the development team size, efficient partnership can help you improve, accelerate, and simplify the process at all stages.
Two factors are crucial for everyone who works with databases daily and strives to deliver high-quality products: implementing best practices into daily routines and choosing the right tools.
So, what steps should any tech team take to enhance collaboration and ensure code quality? Let’s examine helpful strategies for seamless and productive teamwork to improve efficiency and faster database development.
https://techbehindit.com/technology/collaborative-database-development-source-control-for-mysql/ | dbajamey |
1,869,024 | Hiring the Right Talent: Tips for Building Your Dev Team | Thus, that’s quite crucial to strengthen your development team since the projects and the company’s... | 0 | 2024-05-29T13:22:58 | https://dev.to/christinek989/hiring-the-right-talent-tips-for-building-your-dev-team-jp8 | mobile, development, developmentteam, outsourcing | Thus, that’s quite crucial to strengthen your development team since the projects and the company’s performance in the tech industry usually depend much on it. [IT Staffing](https://www.addevice.io/blog/navigating-it-staffing) is an important procedural activity that should be implemented in order to get the right personnel, talent, and expertise for creating innovative solutions within a limited span of time at an optimum cost. In this article we’ll review some key factors related to how one can find and attract the right talent for developing their product.
### Tips When Hiring the Right Development Talent
#### Identifying and Sourcing Top Talent
**Defining Your Development Needs**
Before you go out to identify talent, you need a clear understanding of your development needs. Get started by outlining the precise roles that need to be filled on your team. These might include:
- Front-end developers
- Back-end developers
- Full-stack developers
- DevOps engineers
Each position requires specific skills and expertise, so understanding these factors will enable you to advertise the roles effectively and filter candidates.

The next important step is writing detailed job descriptions. A well-crafted job description should outline:
- Responsibilities of the role
- Required skills and qualifications
- Technologies and programming languages used
- Company culture and values
State in detail the technologies and tools your team works with; this will attract candidates who have a background in those areas. For instance, if you are looking for a front-end developer, specify that the person should have experience with frameworks such as React or Angular.
**Effective Sourcing Strategies**
Once you have suitable job descriptions, the next step is sourcing the talent. Using job boards and online communities is a best practice, and employers should also use social media to identify and reach out to potential candidates. Many of these platforms let you not only advertise your openings but also search for candidates directly.

Social media is another efficient sourcing channel that cannot be overlooked. LinkedIn and Twitter, for instance, are professional networks as much as social ones. Attending conferences, job fairs, and other tech forums is also an excellent way to meet tech people in person and convince them to apply for the positions you are offering.
**Effective sourcing strategies include:**
- Posting to well-known recruitment sites (e.g. Stack Overflow, GitHub, LinkedIn)
- Engaging with tech communities and forums on social media
- Using social networks to build professional connections
- Working with professional tech recruitment agencies
Recruitment agencies are especially useful if you need candidates for senior positions, as the right tech recruitment agency can get the job done in rather short order. They maintain massive pre-screened databases of candidates and can assist throughout the hiring process.
## How to Have Successful Interviews and Onboarding
#### Interview Process Best Practices
An interview is not just your chance to convince the candidate to work for you; it is also an excellent opportunity to assess the candidate’s skills and whether they will fit your company’s culture. Preparing technical assessments carefully is very important. Effective approaches include coding challenges, project-related assignments, and problem-solving activities. Make sure these assessments correspond to the responsibilities a candidate will face in the workplace.

Besides technical competencies, it is equally critical to evaluate behavioral and cultural fit. Conduct behavioral interviews to learn how candidates solve problems, work in a team, and communicate with team members. Ask questions that reveal their past experiences and the approaches they took in handling certain scenarios. This will also help ensure new hires fit into your company culture rather than disrupt it.
**Interview process best practices:**
- Include relevant technical assessments and practical work (e.g. coding problems, project-based activities)
- Test behavioral and cultural fit through targeted questions
- Evaluate the candidate’s problem-solving ability, teamwork, and communication skills
#### Competitive Salary Offers and Employee Orientation
When you have finally found the right candidate, it is important to make an offer that matches their skills and experience. Go through the market research available in your industry to ensure you offer a competitive salary and remuneration. Do not forget that your prime candidates may be deciding between several options, so positioning your offer appropriately is crucial. The offer should also include perks such as health care, work-from-home provisions, and career advancement opportunities.
**Components of a competitive offer:**
- Competitive salary
- Various comprehensive workplace benefits that include health coverage, flexibility to work from home, etc.
- Opportunities for professional development
Onboarding is the process of initial training and integration of newly joined employees according to the company’s standards. A structured onboarding plan helps new hires settle in and reduces early difficulties. It should include introductions to team members, an overview of the company’s policies and procedures, orientation with the technical team, and the new hire’s initial training and first assignments.

Beyond onboarding, it is important to understand each employee’s individual needs and paths for improvement. Provide frequent and effective mentorship as well as training to foster the sustainable growth of your top talent. Pair new employees with seasoned colleagues so that new hires receive specific advice on their first assignments and the right approach to the new environment. Supporting their learning and career advancement improves their performance, as well as their satisfaction and workplace commitment.
**Onboarding essentials:**
- Structured onboarding plan
- Introductions to team members
- Overview of company policies
- Initial training sessions
Ongoing training and mentoring round out these onboarding essentials.
### Conclusion
[Recruiting the best development team](https://www.addevice.io/blog/remote-development-team) is one of the most crucial factors a business needs to get right. Defining your development needs, [outsourcing](https://www.addevice.io/blog/it-software-outsourcing-trends) or sourcing candidates effectively, conducting well-rounded interviews, and running an effective onboarding process will help you attract and retain talent. Talent acquisition goes beyond merely filling roles: it is about contributing to the achievement of organizational goals and the long-term advancement of the firm.

Choosing the right developers is one of the single most important steps towards your business goals. Keep these guidelines in mind to help you build a team of competent, well-matched co-workers who represent your company’s outlook. A strong and motivated staff will make their projects successful, and the company will keep expanding and progressing in the highly competitive world of technology.

That way, you are on course to assemble a solid development team, one that can handle any problem that comes its way and fuel the expansion of your business. Remember, the primary factor of success is the proper choice of personnel, followed by providing the conditions that enable these people to work effectively. | christinek989 |
1,869,023 | DumpsBoss's AZ500 Dumps Success Blueprint: Your Key to Triumph | Empower Yourself with DumpsBoss Today Don't let the AZ-500 exam intimidate you. With DumpsBoss by... | 0 | 2024-05-29T13:22:55 | https://dev.to/michael546/dumpsbosss-az500-dumps-success-blueprint-your-key-to-triumph-4429 | Empower Yourself with DumpsBoss Today
Don't let the AZ-500 exam intimidate you. With DumpsBoss by your side, you have everything you need to conquer the challenge and emerge victorious. Take the first step towards realizing your career aspirations in Azure security by choosing DumpsBoss as your trusted partner in success. Visit our website now to explore our range of study materials and unlock a world of opportunities in cloud security. Your journey to becoming a certified Azure Security Engineer starts here, with DumpsBoss.
Elevate Your Azure Security Expertise with DumpsBoss: The Definitive AZ-500 Study Guide PDF and Dumps PDF
As technology continues to advance, the importance of securing cloud environments becomes paramount. Microsoft Azure, one of the leading cloud platforms, offers the AZ-500: Microsoft Azure Security Technologies certification to validate professionals' skills in implementing security controls, maintaining security posture, and identifying and remediating vulnerabilities. To assist you in mastering the <a href="https://dumpsboss.com/microsoft-exam/az-500/">AZ500 Dumps</a>, DumpsBoss provides unparalleled resources tailored to your success.
Unraveling the AZ-500 Dumps PDF: Your Key to Exam Success
Navigating through the intricacies of the AZ-500 exam requires comprehensive preparation, and DumpsBoss is your trusted companion on this journey. Our AZ-500 Dumps PDF is meticulously crafted to encompass the breadth and depth of topics covered in the exam. Here's why our Dumps PDF stands out:
• Authentic Exam Simulation: Gain confidence by practicing with real exam questions meticulously curated by industry experts. Our Dumps PDF mirrors the format and difficulty level of the actual AZ-500 exam, ensuring you're well-prepared for the challenges ahead.
• Detailed Explanations: Understand the rationale behind each answer with our detailed explanations. Grasp the underlying concepts and principles, empowering you to tackle even the most complex scenarios with ease.
• Continuous Updates: Stay ahead of the curve with our regularly updated Dumps PDF. We keep pace with the latest developments in Azure security technologies, ensuring that our materials align with the most current exam objectives.
Access High-Quality Dumps >>>>>: https://dumpsboss.com/microsoft-exam/az-500/ | michael546 | |
1,869,022 | Art Expo Taiwan | https://www.expostandzone.com/trade-shows/art-taipei Art Taipei is also an important platform for... | 0 | 2024-05-29T13:22:28 | https://dev.to/expostandzoness/art-expo-taiwan-hf2 | https://www.expostandzone.com/trade-shows/art-taipei
Art Taipei is an important platform for international art exchange in the Asia-Pacific region. It will be held at the Taipei World Trade Center in Taipei, Taiwan.
| expostandzoness | |
1,868,982 | Policies - Creating a Shortcut on the Desktop | Creating and Configuring the Policy In our Domain plugin, we click the Add Object section and from the opened... | 0 | 2024-05-29T13:22:11 | https://dev.to/dogacakinci/politikalar-masaustunde-kisayol-olusturma-3p4c | # Creating and Configuring the Policy
In our Domain plugin, we click the Add Object ("Nesne Ekle") section, select the object type from the interface that opens, give our object a name, and click the Add button.


Then, when we select Policies from the Object Types, we can see the policy we added.

After clicking on the policy we added, we select the "User" ("Kullanıcı") section in the interface that opens. Since we will be adding a "Shortcut" ("Kısayol"), we need to enter the required information: in "Shortcut Location" ("Kısayol Konumu") we enter the path where we want the shortcut to be, in "Command to Run" ("Çalışacak Komut") we enter the command the shortcut will execute, and in "Launcher Icon" ("Başlatıcı İkonu") we add the shortcut's icon. Then we save the policy by clicking the save icon.

# Applying and Verifying the Policy

On the client side, we run the
```
gpupdate -v
```
command. If we see the message "Politika Sonucu: Başarılı" ("Policy Result: Successful"), the policy was applied successfully and the shortcut has appeared at the location we specified.


| dogacakinci | |
1,868,992 | Why You Should End Your Source Files With a New Line | In software development, seemingly minor details can have a significant impact on the efficiency,... | 0 | 2024-05-29T13:17:57 | https://dev.to/documendous/why-you-should-end-your-source-files-with-a-new-line-156g | bestpractice, sourcecode, vscode, codereview | In software development, seemingly minor details can have a significant impact on the efficiency, maintainability, and compatibility of your code. One such detail is the practice of ending your source files with a blank line.
While it might seem trivial at first glance, this simple convention plays a crucial role in adhering to industry standards, preventing unnecessary version control conflicts, and ensuring smooth operation across various development tools and environments.
In this article, I would like to discuss the reasons behind this best practice and shed some light on how a single newline character at the end of your files can contribute to a more robust and harmonious coding experience with your IDE, repository and working with other developers.
Having a blank line at the end of your source files is considered good practice for several reasons:
**POSIX Compliance:** The POSIX standard requires that a text file end with a newline character. Many tools and utilities expect this convention and can produce errors or warnings if the final newline is missing.
**Version Control Systems:** When using version control systems like Git, having a newline at the end of files can prevent unnecessary diffs. If two versions of a file have or don't have a newline at the end, the version control system might treat this as a meaningful difference, leading to avoidable conflicts.
**Text Editors:** Some text editors and IDEs automatically add a newline at the end of a file. If your file doesn't have a newline, the editor might add it, causing an unnecessary change.
**Code Linters and Formatters:** Many code linters and formatters enforce this rule as part of their style guides. Adhering to this convention can help maintain consistency across a codebase.
**File Concatenation:** When concatenating files, not having a newline at the end of a file can lead to issues where the last line of one file and the first line of the next file are joined together.
Let's go into these reasons in more detail:
**POSIX Compliance**
The POSIX standard defines a text file as a sequence of lines, each ending with a newline character. This means that a properly formatted text file must end with a newline. Many Unix-based tools and utilities, which follow POSIX standards, expect this newline character at the end of a file. If it's missing, these tools may not behave as expected, potentially causing errors or misinterpretations of the file's contents.
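As a quick illustration, here is a small, self-contained Python sketch (the helper names are my own invention, not from any particular tool) that checks whether a file ends with a newline and appends one if it is missing:

```python
import os
import tempfile
from pathlib import Path

def ends_with_newline(path):
    """True if the file is non-empty and its last byte is a newline."""
    data = Path(path).read_bytes()
    return len(data) > 0 and data.endswith(b"\n")

def ensure_trailing_newline(path):
    """Append a single newline if the file does not already end with one."""
    p = Path(path)
    data = p.read_bytes()
    if data and not data.endswith(b"\n"):
        p.write_bytes(data + b"\n")

# Demo on a throwaway file that is missing its final newline.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"print('hello')")  # note: no trailing \n
path = f.name

before = ends_with_newline(path)   # False
ensure_trailing_newline(path)
after = ends_with_newline(path)    # True
fixed = Path(path).read_bytes()    # b"print('hello')\n"
os.unlink(path)
```

Running `ensure_trailing_newline` a second time is a no-op, so a script like this can safely be run over an entire codebase.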
**Version Control Systems**
In version control systems like Git, each line in a file is tracked, and differences between versions are highlighted. If a file ends without a newline, adding one in a later commit might be seen as a change to the last line of the file. This can create unnecessary diffs and complicate the process of reviewing changes. Ensuring a newline at the end of each file avoids such trivial differences, leading to cleaner and more meaningful version histories.
**Text Editors**
Many text editors and integrated development environments (IDEs) automatically append a newline character to the end of a file when saving. If your file doesn't already end with a newline, the editor's automatic addition can create an unintended change. This can be particularly problematic in collaborative environments where different team members use different editors. Consistently ending files with a newline helps avoid these automatic and unintended modifications.
**Code Linters and Formatters**
Code linters and formatters often enforce a set of style guidelines to maintain consistency and readability in a codebase. One common rule is to ensure that files end with a newline. This consistency helps avoid small stylistic differences that can clutter code reviews and lead to merge conflicts. By adhering to this rule, you ensure that your codebase remains clean and maintainable.
**File Concatenation**
When concatenating multiple files together, having a newline at the end of each file ensures that the content remains properly separated. Without a newline, the last line of one file and the first line of the next file can be merged into a single line, causing syntax errors or other issues. For example, in languages where line breaks are significant (like Python), this can lead to broken code. Ensuring each file ends with a newline prevents such concatenation issues and maintains the integrity of the combined content.
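To make the concatenation hazard concrete, here is a short Python sketch (mimicking what `cat a.txt b.txt` does byte-for-byte) showing two lines being silently merged when the first file lacks a final newline:

```python
import os
import tempfile

def concat(paths):
    """Join files byte-for-byte, the same way `cat` does."""
    return b"".join(open(p, "rb").read() for p in paths)

# First file is missing its trailing newline.
a = tempfile.NamedTemporaryFile(delete=False)
a.write(b"alpha\nbeta")   # no final \n
a.close()
b = tempfile.NamedTemporaryFile(delete=False)
b.write(b"gamma\n")
b.close()

merged = concat([a.name, b.name])
# The last line of the first file and the first line of the second
# file are fused into a single line: b"betagamma".
lines = merged.split(b"\n")
os.unlink(a.name)
os.unlink(b.name)
```

With a trailing newline on the first file, `merged` would instead contain three distinct lines, as intended.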
**Summary**
**POSIX Compliance:** Ensures compatibility with Unix-based tools.
**Version Control Systems:** Avoids unnecessary diffs and conflicts.
**Text Editors:** Prevents unintended changes due to automatic newline additions.
**Code Linters and Formatters:** Maintains consistency and readability.
**File Concatenation:** Ensures proper separation of content when files are combined.
Adhering to the practice of ending files with a newline character helps create a smoother, more predictable development workflow and avoids a range of potential issues that can arise in collaborative and automated environments.
Be sure to add any comments to add any insight on this subject! It's also appreciated if you have different views on it as well. | documendous |
1,868,991 | StickAI | This is the sample | 0 | 2024-05-29T13:17:42 | https://dev.to/sudhan23082004/stickai-247g | This is the sample | sudhan23082004 |
1,868,962 | Two Days Indie Dev Life: Mailchimp, Webflow & Zapier - A Love Story | My new landing page is done, I got my email form with download link working and I found a cool new... | 0 | 2024-05-29T13:16:43 | https://dev.to/devlifeofbrian/two-days-indie-dev-life-mailchimp-webflow-zapier-a-love-story-8h1 | buildinpublic, flutter, programming, marketing | [My new landing page](http://floatnote.com/) is done, I got my email form with download link working and I found a cool new tool for measurements. I also share some nice Webflow templating insights and how I made a very big mistake 😁. Let’s get into it.
# Website & Leads
Two days ago I worked the entire day on the new homepage / landing page of the website. This is after I realised that many people reach [my app](https://www.floatnote.com/) for the first time via desktop web. Currently, my download links are set up to show [the web app](https://floatnote.app/) if they are visited on a desktop. I prefer them to leave their email so they can download the mobile app themselves later, as the mobile app currently has a much better user experience.

The web version works fine too, don't get me wrong. But some of the UI elements are not yet tuned for the web, which makes some things look out of proportion in certain places. No problem for functionality, but it does look a bit sloppy here and there.

I originally tried to make a download page (in addition to my new landing page) in Mailchimp, but I don't like the templates and the results I get with it. *Note from the future, I was using the old builder so that might have to do with it. Either way I decided to go with Webflow*. I initially only wanted to create a download page, but then I thought, while I'm at it, I might as well do the whole homepage/landingpage, because I now have a very cool new slogan that I want to reflect. This will improve my SEO and also make sure the website is up to date with all new features. So two days ago, after a good back workout, I went to my favorite place and started working.

# Webflow Template
After about two or three hours I had to switch locations to regain my focus. I went to one of my new favorite spots in a park.

Once there I went to look for templates on Webflow. I quickly managed to find a good one that I wanted to use and started implementing it. A lesson I learned from implementing this template and a few templates in webflow before is the following.
## Stick To Your Concept
Originally [Float Note](https://www.floatnote.com/) started out as a desktop application, but once the mobile version got developed mobile became the main app, web was set up and desktop got discontinued. Simply because of time issues, I might continue it one day.

You can save yourself a lot of time by choosing a template that really represents the concept you are selling. In my case now, a mobile app template is better than a desktop SaaS template.

This has to do with the different types of visual elements that are all tailored to mobile app screenshots. To save yourself a lot of time, it is useful if you download these screenshots, adjust them and use the same size (this is important) and then upload them again.

This will keep many animations working properly and most importantly keep proportions looking good. If you don't do this you often spend a lot of time keeping your template nice and responsive because you disturb it because of the different sizes of your screenshots.
# Twitter Game
I’ve been active on twitter for about a week now. It’s still kind of new to me but something really cool happened yesterday. [DEV.TO](https://dev.to/) put one of my daily blogs in one of their tweets, they have like 300k+ followers, I couldn’t believe it. Very very cool, thanks a lot 🙏.

And while we’re at it I updated [my twitter](https://x.com/devlifeofbrian) header yesterday. It was still blank and of course I thought this might be the perfect opportunity for me to promote my app. So I made a header, which at first looked great on my laptop but got all messed up on mobile. So I figured out a way to fix this, assuming that most mobiles will have the same kind of dimensions more or less.

I took a screenshot of my phone and enlarged it on top of the design in Canva. This way, by playing around with the opacity of the screenshot, I could see how my big design would turn out on the big screen and on mobile. It looks a bit messy but it worked! The challenge was keeping the big design looking good. It’s easy to work around the island and battery level etc. But it’s a bit harder to still make it look good when those things are not there. I used some stars and planets to fill up the space where I felt it was too empty.

I’m pretty happy with the result for now. I shouldn’t spent too long on these kind of things, I have a tendency to play around with pixels for hours. Luckily I’m aware of this.
# Tuesday Trip Day
Per usual I went to Eindhoven tuesday. After a go(o)d workout I moved myself to work at one of my clients.

There wasn’t much to do for me so I continued with setting up the landing page. My goal for today is clear. I want a page where users can leave their email and once they do, it should add the user to mailchimp and send a download link. The email should look a bit like this:


By the end of the day, I got it working. I spent a lot of time trying to build something decent in Webflow and Mailchimp. These tools are great for many things, but I find it difficult (read: not impossible, and more a limitation of my skills) to create what’s in my head with them. I’m probably spoiled by Flutter and how easy things go and how my brain thinks that way. I pulled a few hairs out of my head today because of these tools.

## Webflow & PixelSnap2
Above you see the final result of the download page on web. While working on this page I was going back and forth finetuning the design, and I remembered there was this tool that could show you dimensions on the screen with the click of a button. I’m a pixel perfect guy by nature and I do a lot of measuring when I develop my apps, but I’m a bit more loose when I do something like this. However, some things were just too obvious. Like the download button being bigger than the form field. That just looked off. It has to do with the elements being drawn with their fonts + padding as the starting point, instead of heights. Anyway, what I wanted to tell you about is this tool! It’s so cool. It’s from the same company that built [CleanShot X](https://cleanshot.com/), the tool I use to create my screenshots. It’s called [PixelSnap2](https://getpixelsnap.com/).

With the press of a button you can put measurements on the screen and keep them there until you remove them. It’s so good for quick measuring! You press a button, you measure stuff, click a bit here and there if you want to and then go about your business. I can’t wait to use this while developing one of my apps.

# Mailchimp Journey
After I got the web page working in Webflow I had to set up a Journey in mailchimp.
One of the more confusing parts of this was setting up the linking between Webflow and Mailchimp. Somehow I thought they would have a native integration with each other, but they don’t. So naturally I asked ChatGPT how to solve this.

One funny thing, I spent a good time on trying to figure out what the `step_id` was. It turns out it was part of the url that I had already. That’s what you get for ADHD scanning the documentation instead of reading it.

Once I set up the journey I used Zapier (for now) to link the two together. I’ve heard about Zapier before, I even played around with it but never had a real use case for it. I will replace it once it becomes too expensive but for now it’s a great solution. It worked great too, I could ask it in natural language what I wanted to do, then their AI set up the initial steps for me. One of the few apps where I see AI implemented in a really good way. Kudos to them.

After setting everything up I tested it and it works! Eureuka.

I did notice just now that the sender looks very techy. I thought I fixed that. I’m putting that in my Float Note inbox for later with [Alfred](https://www.alfredapp.com/) and the query option of Float Note.

# Big Mailchimp Mistake
So before we end this blog, let me tell you about one big mistake I made yesterday, which caused me to accidentally email everyone that ever registered for Float Note. Before I set everything up correctly with Zapier, I was on the wrong path. I got the data from the Webflow form but selected the wrong campaign. So when I pressed the button to test the integration, it got the data from Webflow but triggered a campaign that sent my download link email to all current subscribers of Float Note. That’s messy! So everyone got an email yesterday with a download link, no context whatsoever. Really bad 😂. But honest mistake, you live you learn. I’m ‘glad’ it was only 200 people. And I see only two people unsubscribed. Stupid mistake, lesson learned.

Also, I see a lot of bounces; this has to do with DMARC, I believe. I haven’t looked into it yet. But I will once I get my mail campaign game going. For now, we have more important stuff to attend to.
# Thank you 🙏
Once again, thank you for reading if you came this far. I really appreciate the support I’m getting. It’s taking up a lot of time sometimes but it’s worth it. I’m still finding my way around building the perfect system for this. See you next time 🤙

### About Me
About me: I’m an independent app developer with ADHD and the creator of Float Note, an app that tackles four common ADHD challenges: too many thoughts, trouble organizing, feeling overwhelmed, and staying focused.
### Discount Code
It would mean a lot to me if you gave my app a try and let me know what you think through one of my socials. You can try it for free for 7 days and when the time comes, use code 🎫 “DEVLIFEOFBRIAN" to get 70% off your subscription for life. A gift for only a chosen few that read my blog 🙏. Also, if you really can’t pay for it, but you love the app. Send me a message and we’ll work something out.
### Download
📲 Download it here or click the link in my bio (Android/iOS/web) ➡️ floatnote.com/download
| devlifeofbrian |
1,868,881 | Why Theo is Wrong & We'll Get a Laravel for JavaScript | JavaScript's Need for a Full-stack Framework “Why Don't We Have A Laravel For... | 0 | 2024-05-29T13:15:57 | https://wasp-lang.dev/blog/2024/05/29/why-we-dont-have-laravel-for-javascript-yet | framework, javascript, laravel, wasp | ## JavaScript's Need for a Full-stack Framework
“*Why Don't We Have A Laravel For JavaScript?”.* This is the question [Theo poses in his most recent video](https://www.youtube.com/watch?v=yaodD79Q4iE).
And if you’re not familiar with tools like [Laravel](https://laravel.com/) and [Ruby-on-Rails](https://rubyonrails.org/), they are opinionated full-stack frameworks (for PHP and Ruby) with lots of built-in features that follow established conventions so that developers can write less boilerplate and more business logic, while getting the industry best practices baked into their app.

He answers this question with the opinion that JavaScript *doesn’t need* such frameworks because it’s better to select the tools you want and build the solution you need yourself.
This sounds great — and it also happens to be a nice flex if you’re a seasoned dev — but I feel that he doesn’t back up this claim very well, and **I’m here to tell you where I think he’s wrong**.
In my opinion, the better question to ask is why don’t we have a Laravel for JavaScript *yet*? The answer being that we’re still working on it.
In his summary of the full-stack frameworks of the JavaScript world that could be comparable to Laravel or Rails, he fails to consider a few important points:
1. **People really want a Laravel / Rails for JavaScript**. If they didn’t, there wouldn’t be so many attempts to create one, and he wouldn’t be making a video whose sole purpose is to respond to the pleading cry “_WHY DOESN’T JAVASCRIPT HAVE ITS OWN LARAVEL!?_”
2. **He fails to consider the timing and maturity of the underlying tools within the JS ecosystem**. Perhaps it’s not that a Laravel for JavaScript doesn’t *need* to exist, it’s just that it doesn’t exist yet due to some major differences in the ecosystems themselves, like how old they are and where the innovation is mostly happening.
3. **He also fails to ask for whom these types of solutions are suitable for**. Surely, not all devs have the same objectives, so some might opt for the composable approach while others prefer to reach for a framework.
So let’s take a look at how we got to the point we’re at today, and how we might be able to bring a full-stack framework like Laravel or Rails to the world of JavaScript.
## Getting Shit Done
In his video, Theo brings up the point that "*there's a common saying in the React world now which is that ‘if you're not using a framework you're building one’”.* Even though this is meant to be used as a criticism, Theo feels that most JavaScript devs are missing the point and that building your “own framework” is actually an advantage.

He feels that the modular nature of the JavaScript ecosystem is a huge advantage, but that sounds like a lot of pressure on the average developer to make unnecessary judgement calls and manage lots of boilerplate code.
Sure, you have teams that need to innovate and meet the needs of special use cases. These are the ones that prioritize modularity. They tweak, improve, and squeeze as much out of developer experience (DX) and performance as possible to get their unique job done right.
But on the other hand, there are also numerous teams whose main objective is producing value and innovating on the side of the product they are building, instead of the tools they are using to build it. These devs will favor a framework that allows them to focus solely on the business logic. This gives them a stable way to build stuff with best practices so they can easily advance from one project to another. In this camp are also the lean, mean indiehackers looking for frameworks so they can move fast and get ideas to market!

It’s a bit like the difference between Mac and Linux. Mac’s unified stack that just works out-of-the box means many professionals prefer it for its productivity, whereas Linux is great if you’re looking for flexibility and have the time and knowledge to tweak it to your desires. Both are valid solutions that can coexist to meet different needs.
This focus on productivity is what made Rails so powerful back in the day, and why Laravel is such a loved framework at the moment. And the many attempts at creating such a framework for JavaScript is proof enough that there is a large subset of JavaScript devs who also want such a solution.
But maybe the reason such a framework doesn’t exist yet doesn’t have to do with whether devs want one or not, but rather the important factors which are needed in order for such a framework to come together haven’t aligned up until this point. For such a framework to be widely adoptable, it first needs underlying technologies that are stable enough to build upon. After that, it needs time and many iteration cycles to reach maturity itself, so that devs can feel comfortable adopting it.
Have these factors aligned in the JavaScript world to give us the type of frameworks that PHP and Ruby already have? Maybe not quite yet, but they do seem to be slowly coming together.
## Comparing Ecosystems
One of Theo’s main points is that JavaScript as a language enables a level of modularity and composability that languages like Ruby and PHP don’t, which is why Ruby and PHP ecosystems are well served by full-stack frameworks, but JavaScript *doesn’t need one* since you can just compose stuff on your own.
While JavaScript is a peculiar language, with its support for both functional and imperative paradigms and dynamic nature, it also comes with a lot of pitfalls (although it has improved quite a bit lately), so you don’t typically hear it get praised in the way Theo does here. In fact, you are probably more likely to hear praise for Ruby and its properties as a modular and flexible language.
So if it isn’t some unique properties of JavaScript as a language that make it the king of web dev, what is it then?

Well, the answer is pretty simple: **JavaScript is the language of the browser**.
*Way back* when most of the web development was happening on the server side, PHP, Java, Ruby and other languages were reigning supreme. During this era, devs would only write small pieces of functionality in JavaScript, because most of the work was being handled server-side.
But as web development evolved and we started building richer applications, with more dynamic, responsive, and real-time features, a lot of code moved away from the server and over towards JavaScript on the client, because it’s (basically) the only language that supports this. So instead of doing your development mostly in PHP or Ruby with a little bit of JavaScript sprinkled in there, you were now splitting your apps between substantial amounts of JavaScript on the client, plus Ruby or PHP on the server.
JavaScript’s final power move came with the arrival of NodeJS and the ability to also write it on the server, which secured its position as the king of web dev languages. Today, devs can (and do) write their entire apps in JavaScript. This means you need to know one language less, while you’re also able to share the code between front-end and back-end. This has opened up a way for better integration between front-end and back-end, which has snowballed into the ecosystem we know today.
So it’s not so much the unique properties of JavaScript as a language that have made it the dominant ecosystem for web development, but more its unique monopoly as the only language that can be used to write client code, plus it can also be used server-side.

As Theo says, “_we’ve got infinitely more people making awesome solutions_” in the JavaScript ecosystem. That’s right. It’s exactly those infinite number of developers working in the space creating the flexibility and modular solutions for JavaScript, rather than it being an innate quality of the programming language.
And because the JavaScript ecosystem is still the hottest one around, it has the most devs in total while continuing to attract new ones every day. This means that we get a large, diverse community doing two main things:
1. Innovating
2. Building
The innovators (and influencers) tend to be the loudest, and as a result opinion largely skews in their favor. But there is also a lot of building, or “normal” usage, happening! It’s just that the innovators tend to do the talking on behalf of the builders.
So with all that’s going on in the JavaScript ecosystem, is it pointless to try and build a lasting framework for JavaScript developers, as Theo suggests, or are we on the path towards achieving this goal regardless of what the innovators might claim?
## Show Me What You’re Working With
Theo also drops the names of a bunch of current JavaScript frameworks that have either failed to take off, or “_just can’t seem to get it right_” when it comes to being a comprehensive full-stack solution.

And he does have a point here. So far, solutions like [Blitz](https://blitzjs.com/), [Redwood](https://redwoodjs.com/), [Adonis](https://adonisjs.com/), or [T3](https://create.t3.gg/) haven’t managed to secure the popularity in their ecosystem that Rails or Laravel have in theirs.
**But these things take time.**
Have a look at the graph above. Laravel and Rails have been around for 13-15 years! The JavaScript frameworks being used in comparison are just getting started, with some of them, like [Wasp](https://wasp-lang.dev) and [Redwood](https://redwoodjs.com/), at similar stages in their development as Laravel and Rails were during their initial years.
As you can see, it takes time for good solutions to reach maturity. And even though some of these frameworks are starting to stagnate, their strong initial growth is evidence that demand for these tools definitely exists!
The main overarching issue that tends to plague these tools is that the JavaScript ecosystem moves quite fast, so for a solution like this to survive long term, it needs to be not only opinionated enough, but also modular enough to keep up with the shifts in the ecosystem.

One factor that prevents frameworks from reaching this state is being tied too tightly to the wrong technology. This was NextJS for BlitzJS, GraphQL for Redwood, and Blaze for MeteorJS. And another factor is *not going big enough* with the framework, because it seems too daunting a task within the JavaScript ecosystem, where things move fast and everyone is “terrified of being opinionated” because they might get criticized by the loudest voices in the scene.
In other words, frameworks that avoid going big on their own, and going *truly* full-stack, like Ruby-on-Rails and Laravel went, miss the opportunity to solve the most common pain-points that continue to plague JavaScript developers.
But, the JavaScript ecosystem is maturing and stabilizing, we are learning from previous attempts, and there *will* be a full-stack framework bold enough to go all the way in, get enough things right, and persist for long enough to secure its place.
## Say Hi to Wasp
In his comparison of JavaScript frameworks on the market today, Theo also fails to mention the full-stack framework for React & NodeJS that we’re currently working on, [Wasp](https://wasp-lang.dev).
We’ve been working hard on [Wasp](https://wasp-lang.dev) to be the *truly* full-stack framework that meets the demands of web developers and fills that void in the JavaScript ecosystem to become the framework they love to use.

**With Wasp, we decided to go big, opinionated, and truly full-stack**. In other words, we’re going *all in* with this framework.
That means thinking from first principles and designing a novel approach that only Wasp uses, like building our own compiler for our configuration language, and truly going full-stack, while also keeping it modular enough to move together with the ecosystem as it progresses.
This means that we spent more time in the beginning trying different approaches and building the foundation, which finally brought us a significant jump in usage starting in late 2023. Wasp is now growing strong, and at a really fast pace!
It’s really cool for us to see Wasp being used today to ship tons of new apps and businesses, and even being used internally by some big names and organizations (more info on that will be officially released soon)!

What Wasp does differently than other full-stack frameworks in the JavaScript world is that it separates its main layer of abstraction into its own configuration file, `main.wasp`. This config file gives Wasp the knowledge it needs to take care of a lot of the boilerplatey, infrastructure-focused code, and enables a unique initial compile-time step where Wasp can reason about your web app and use that knowledge while generating the code for it in the background.
In practice, this means that all you have to do is describe your Wasp app at a high level in Wasp’s config file, and then implement everything else in technologies that you’re familiar with such as React, NodeJS, and Prisma. It also means that Wasp has a high modularity potential, meaning we are building it to also support other frontend frameworks in the future, like Vue, Solid or Svelte, and to even support additional back-end languages, like Python, Go or Rust.
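As a rough, hypothetical sketch (modeled on Wasp's public docs; the exact syntax, the version pin, and the `@src/` import path are assumptions that vary by Wasp release), a minimal `main.wasp` might look like:

```wasp
app MyApp {
  wasp: { version: "^0.13.0" }, // assumed version pin
  title: "My App"
}

route RootRoute { path: "/", to: MainPage }
page MainPage {
  // The component itself is plain React code living in src/
  component: import { MainPage } from "@src/MainPage"
}
```

Everything declared here is what Wasp's compile-time step reasons about; the page component itself, plus any queries and actions, remain ordinary React and NodeJS code.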
If you’re the kind of developer that wishes a Rails or Laravel for JavaScript existed, then you should [give Wasp a try](https://wasp-lang.dev/) (and then [head into our Discord](https://discord.gg/rzdnErX) and let us know what you think)!
## Where Are We Headed?
We firmly believe that there will be a full-stack framework for JavaScript as there is Laravel for PHP and Ruby-on-Rails for Ruby.
It just seems like, at the moment, that we’re still working towards it. It also seems very likely that we will get there soon, given the popularity of current meta-frameworks and stacks like NextJS and T3.
But this stuff takes time, and patience.
Plus, you have to be bold enough to try something new, knowing you will get criticized for your work by some of the loudest voices in the ecosystem.
That’s what we’re prepared for and why we’re going *all in* with Wasp.
See you there!

| vincanger |
1,868,990 | Validation and controller setup in Nest JS | Love to work with you, You can hire me on Upwork. We have gone from defining first phase requirement... | 27,543 | 2024-05-29T13:09:56 | https://dev.to/depak379mandal/validation-and-controller-setup-in-nest-js-kfj | javascript, node, nestjs, typescript | Love to work with you, You can hire me on [Upwork](https://www.upwork.com/freelancers/~011463d96b5b87d7ff).
We have gone from defining the first-phase requirements to config loading and TypeORM setup with migrations. Now we will look into setting up validation, serialization, and controllers with Swagger decorators.
We are starting with the auth module, and we are going to create an email controller instead of an auth controller, since in the future we will add more services and controllers for authentication providers like Google, Facebook, and LinkedIn. We already discussed the APIs in the getting started article; we are implementing those here for the controller and validation setup.
So I will just add the controller with some comments, and later I will explain the details.
```typescript
// src/modules/auth/email.controller.ts
import { Body, Controller, HttpCode, HttpStatus, Post } from '@nestjs/common';
import {
ApiAcceptedResponse,
ApiCreatedResponse,
ApiForbiddenResponse,
ApiNoContentResponse,
ApiNotFoundResponse,
ApiOperation,
ApiTags,
} from '@nestjs/swagger';
import {
EmailVerifyDto,
LoginDto,
RegisterDto,
SendVerifyMailDto,
} from './email.dto';
import { EmailService } from './email.service';
// This is for swagger to aggregate all the APIs under this tag
@ApiTags('Auth Email')
// auth/email is the path; it will get the /v1 version prefix, becoming /v1/auth/email
@Controller({
path: 'auth/email',
version: '1',
})
export class EmailController {
  // It will be used later; right now we are just
  // going to return whatever we get from the body
constructor(private emailService: EmailService) {}
@Post('/register')
@ApiOperation({ summary: 'Register by email' })
@ApiCreatedResponse({
description: 'User successfully registered.',
})
// this actually overrides the default status code
@HttpCode(HttpStatus.CREATED)
async register(@Body() registerDto: RegisterDto) {
return registerDto;
}
@Post('/verify')
@ApiOperation({ summary: 'Verify Email address.' })
@ApiAcceptedResponse({
description: 'Email verified successfully.',
})
@HttpCode(HttpStatus.ACCEPTED)
async verify(@Body() emailVerifyDto: EmailVerifyDto) {
return emailVerifyDto;
}
@Post('/login')
@ApiOperation({ summary: 'Log in with Email.' })
@HttpCode(HttpStatus.OK)
async login(@Body() loginDto: LoginDto) {
return loginDto;
}
@Post('/send-verify-email')
@ApiOperation({ summary: 'Send Verification mail.' })
@ApiNoContentResponse({
description: 'Sent Verification mail.',
})
@ApiForbiddenResponse({
description: 'User already verified.',
})
@ApiNotFoundResponse({
description: 'User not found.',
})
@HttpCode(HttpStatus.NO_CONTENT)
async sendVerifyMail(@Body() sendVerifyMailDto: SendVerifyMailDto) {
return sendVerifyMailDto;
}
@Post('/reset-password-request')
@ApiOperation({ summary: 'Send Reset Password mail.' })
@ApiNoContentResponse({
description: 'Sent Reset Password mail.',
})
@ApiForbiddenResponse({
description: 'Please verify email first.',
})
@ApiNotFoundResponse({
description: 'User not found.',
})
@HttpCode(HttpStatus.NO_CONTENT)
async sendForgotMail(@Body() sendForgotMailDto: SendVerifyMailDto) {
return sendForgotMailDto;
}
}
```
```typescript
// src/modules/auth/email.service.ts
import { Injectable } from '@nestjs/common';
@Injectable()
export class EmailService {
constructor() {}
}
```
I have added some comments in the above code to explain the details of what is included. All the decorators starting with @Api are Swagger decorators that generate documentation for us. To define the HTTP methods of the API we use the @Get, @Post, @Patch, @Head, @Delete, and @Put decorators, which are conveniently available from the NestJS common library. You provide a path, and the base path from the @Controller decorator is automatically merged in as a prefix. You can also observe that I have injected emailService, which we will discuss in the next article, but the DTOs will be covered in this article. DTOs are for validation: they are classes carrying class-validator decorators and Swagger decorators, which help us validate the body and at the same time serve as our Swagger documentation.
You will see many decorators for fetching common data from the request object, or even the Request itself; NestJS provides very convenient decorators for all of those things. Above, I have used the @Body decorator, which gives me back the body object from the request. I can also fetch a single field from the body with @Body('email'), which fetches only the email from the body, nothing else. We can also validate a particular body by providing a class together with a ValidationPipe. We are going to implement a ValidationPipe, but it will be global rather than per endpoint, so it will take care of everything.
We need an endpoint for Swagger too. So first we need all the DTOs we imported from the email.dto.ts file, plus a global validation pipe and the Swagger setup for our application.
```typescript
// src/modules/auth/email.dto.ts
import { ApiProperty } from '@nestjs/swagger';
import { Transform } from 'class-transformer';
import {
IsEmail,
IsNotEmpty,
IsString,
IsStrongPassword,
} from 'class-validator';
const strongPasswordConfig = {
minLength: 8,
minLowercase: 1,
minNumbers: 1,
minSymbols: 1,
minUppercase: 1,
};
export class RegisterDto {
@ApiProperty({ example: 'example@danimai.com' })
@IsEmail()
@Transform(({ value }) =>
typeof value === 'string' ? value.toLowerCase() : value,
)
email: string;
@ApiProperty({ example: 'Password@123' })
@IsString()
@IsStrongPassword(strongPasswordConfig)
password: string;
@ApiProperty({ example: 'Danimai' })
@IsString()
@IsNotEmpty()
first_name: string;
@ApiProperty({ example: 'Mandal' })
@IsString()
@IsNotEmpty()
last_name: string;
}
export class EmailVerifyDto {
@ApiProperty({ example: 'vhsbdjsdfsd-dfmsdfjsd-sdfnsdk' })
@IsString()
verify_token: string;
}
export class LoginDto {
@ApiProperty({ example: 'example@danimai.com' })
@IsEmail()
@Transform(({ value }) =>
typeof value === 'string' ? value.toLowerCase() : value,
)
email: string;
@ApiProperty({ example: 'Password@123' })
@IsString()
@IsStrongPassword(strongPasswordConfig)
password: string;
}
export class SendVerifyMailDto {
@ApiProperty({ example: 'example@danimai.com' })
@IsEmail()
@Transform(({ value }) =>
typeof value === 'string' ? value.toLowerCase() : value,
)
email: string;
}
export class ResetPasswordDto {
@ApiProperty({ example: 'Password@123' })
@IsString()
@IsStrongPassword(strongPasswordConfig)
password: string;
@ApiProperty({ example: 'vhsbdjsdfsd-dfmsdfjsd-sdfnsdk' })
@IsString()
reset_token: string;
}
```
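As a plain-TypeScript illustration (not NestJS code, just a hypothetical sketch) of what the `@Transform` and `@IsStrongPassword(strongPasswordConfig)` decorators above are enforcing:

```typescript
// Mirrors the Transform decorator above: lowercase only if the value is a string
const normalizeEmail = (value: unknown): unknown =>
  typeof value === 'string' ? value.toLowerCase() : value;

// Mirrors strongPasswordConfig: min 8 chars, 1 lower, 1 upper, 1 number, 1 symbol
function isStrongPassword(pw: string): boolean {
  return (
    pw.length >= 8 &&
    /[a-z]/.test(pw) &&
    /[A-Z]/.test(pw) &&
    /[0-9]/.test(pw) &&
    /[^A-Za-z0-9]/.test(pw)
  );
}

console.log(normalizeEmail('Example@Danimai.com')); // example@danimai.com
console.log(isStrongPassword('Password@123')); // true
console.log(isStrongPassword('weakpass')); // false (no uppercase, number, or symbol)
```

In the real app, class-validator runs these checks for us inside the ValidationPipe; this sketch just makes the rules explicit.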
```typescript
// src/utils/validation-options.ts
import {
HttpException,
HttpStatus,
ValidationError,
ValidationPipeOptions,
} from '@nestjs/common';
const validationOptions: ValidationPipeOptions = {
transform: true,
whitelist: true,
errorHttpStatusCode: HttpStatus.UNPROCESSABLE_ENTITY,
exceptionFactory: (errors: ValidationError[]) =>
new HttpException(
{
status: HttpStatus.UNPROCESSABLE_ENTITY,
errors: errors.reduce(
(accumulator, currentValue) => ({
...accumulator,
[currentValue.property]: Object.values(
currentValue.constraints,
).join(', '),
}),
{},
),
},
HttpStatus.UNPROCESSABLE_ENTITY,
),
};
export default validationOptions;
```
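The `exceptionFactory` above reduces class-validator's `ValidationError[]` into a flat `{ field: message }` map. Here is a standalone TypeScript sketch of that reduction, using a simplified error shape purely for illustration:

```typescript
// Simplified shape of class-validator's ValidationError, for this sketch only
type ValidationErrorLike = {
  property: string;
  constraints: Record<string, string>;
};

// The same reduce used in validationOptions.exceptionFactory above
function flattenErrors(errors: ValidationErrorLike[]): Record<string, string> {
  return errors.reduce<Record<string, string>>(
    (accumulator, currentValue) => ({
      ...accumulator,
      [currentValue.property]: Object.values(currentValue.constraints).join(', '),
    }),
    {},
  );
}

const errors = flattenErrors([
  { property: 'email', constraints: { isEmail: 'email must be an email' } },
  { property: 'password', constraints: { isStrongPassword: 'password is not strong enough' } },
]);
console.log(errors);
// { email: 'email must be an email', password: 'password is not strong enough' }
```

So a failing request gets a 422 response whose `errors` object maps each invalid field to its joined constraint messages, which is much easier for a frontend to consume than the nested default.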
```typescript
// src/utils/bootstrap.ts
import {
INestApplication,
ValidationPipe,
VersioningType,
} from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { DocumentBuilder, SwaggerModule } from '@nestjs/swagger';
import { SerializerInterceptor } from './serializer.interceptor';
import validationOptions from './validation-options';
// for swagger documentation builder it takes application and gives back us with setup
export const documentationBuilder = (
app: INestApplication,
configService: ConfigService,
) => {
const config = new DocumentBuilder()
.addBearerAuth()
.setTitle(configService.get('app.name'))
.setDescription('The Danimai API description')
.setVersion('1')
.build();
const document = SwaggerModule.createDocument(app, config);
SwaggerModule.setup('docs', app, document);
};
// some global implementation in one function like URI versioning, global pipe and interceptor
export const createApplication = (app: INestApplication) => {
app.enableShutdownHooks();
app.enableVersioning({
type: VersioningType.URI,
});
app.useGlobalInterceptors(new SerializerInterceptor());
app.useGlobalPipes(new ValidationPipe(validationOptions));
return app;
};
```
```typescript
// src/utils/serializer.interceptor.ts
import {
Injectable,
NestInterceptor,
ExecutionContext,
CallHandler,
} from '@nestjs/common';
import { Observable } from 'rxjs';
@Injectable()
export class SerializerInterceptor implements NestInterceptor {
intercept(context: ExecutionContext, next: CallHandler): Observable<unknown> {
return next.handle().pipe();
}
}
```
Above we have introduced some new things, like the validation pipe, which also helps us with documentation. Bear with me: we will understand them one by one as we move forward, so just include them for now. As we now have a bootstrap file, we can utilize it in our `main.ts` file.
```typescript
// src/main.ts
import { NestFactory } from '@nestjs/core';
import { AppModule } from './modules/app/app.module';
import { ConfigService } from '@nestjs/config';
import { createApplication, documentationBuilder } from './utils/bootstrap';
async function bootstrap() {
const app = await NestFactory.create(AppModule);
const configService = app.get(ConfigService);
// bootstrapped functions
createApplication(app);
documentationBuilder(app, configService);
await app.listen(configService.get('app.port') || 8000);
}
bootstrap();
```
Even now we will not get any output, as we have not included anything in auth.module.ts, and that module is not yet included in the parent app.module.ts file.
```typescript
// src/modules/auth/auth.module.ts
import { Module } from '@nestjs/common';
import { EmailController } from './email.controller';
import { EmailService } from './email.service';
@Module({
controllers: [EmailController],
providers: [EmailService],
})
export class AuthModule {}
```
And we include this module in main app module.
```typescript
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { configLoads } from '../config';
import { TypeOrmModule } from '@nestjs/typeorm';
import { TypeORMConfigFactory } from '../database/typeorm.factory';
import { AuthModule } from '../auth/auth.module';
// here we have included that
const modules = [AuthModule];
export const global_modules = [
ConfigModule.forRoot({
load: configLoads,
isGlobal: true,
envFilePath: ['.env'],
}),
TypeOrmModule.forRootAsync({
useClass: TypeORMConfigFactory,
}),
];
@Module({
imports: [...global_modules, ...modules],
})
export class AppModule {}
```
It is just first setup to cover everything else, in upcoming series. If now you will run the application using
```bash
npm run start:dev
```
It will show you this, where every route is getting mapped under the AuthModule instance.
```bash
[10:23:03 PM] File change detected. Starting incremental compilation...
[10:23:03 PM] Found 0 errors. Watching for file changes.
[Nest] 29745 - 04/05/2024, 10:23:04 PM LOG [NestFactory] Starting Nest application...
[Nest] 29745 - 04/05/2024, 10:23:04 PM LOG [InstanceLoader] AppModule dependencies initialized +12ms
[Nest] 29745 - 04/05/2024, 10:23:04 PM LOG [InstanceLoader] TypeOrmModule dependencies initialized +0ms
[Nest] 29745 - 04/05/2024, 10:23:04 PM LOG [InstanceLoader] ConfigHostModule dependencies initialized +0ms
[Nest] 29745 - 04/05/2024, 10:23:04 PM LOG [InstanceLoader] AuthModule dependencies initialized +5ms
[Nest] 29745 - 04/05/2024, 10:23:04 PM LOG [InstanceLoader] ConfigModule dependencies initialized +0ms
[Nest] 29745 - 04/05/2024, 10:23:04 PM LOG [InstanceLoader] TypeOrmCoreModule dependencies initialized +74ms
[Nest] 29745 - 04/05/2024, 10:23:04 PM LOG [RoutesResolver] EmailController {/auth/email} (version: 1): +24ms
[Nest] 29745 - 04/05/2024, 10:23:04 PM LOG [RouterExplorer] Mapped {/auth/email/register, POST} (version: 1) route +2ms
[Nest] 29745 - 04/05/2024, 10:23:04 PM LOG [RouterExplorer] Mapped {/auth/email/verify, POST} (version: 1) route +0ms
[Nest] 29745 - 04/05/2024, 10:23:04 PM LOG [RouterExplorer] Mapped {/auth/email/login, POST} (version: 1) route +1ms
[Nest] 29745 - 04/05/2024, 10:23:04 PM LOG [RouterExplorer] Mapped {/auth/email/send-verify-email, POST} (version: 1) route +0ms
[Nest] 29745 - 04/05/2024, 10:23:04 PM LOG [RouterExplorer] Mapped {/auth/email/reset-password-request, POST} (version: 1) route +0ms
[Nest] 29745 - 04/05/2024, 10:23:04 PM LOG [NestApplication] Nest application successfully started +2ms
```
And to view Swagger, go to `http://localhost:8000/docs/` in the browser; it will show the Swagger UI.

But the above does not yet work as expected: it only takes the body, validates it, and returns an error if invalid or the same data back if valid. So we need to implement services for this controller.
Let us meet in the next one to complete these API endpoints one by one and dive deeper into more concepts of NestJS. Thank you | depak379mandal |
1,868,989 | The Future of Software Testing: Exploring Innovative Testing and Trends | As we dive deeper into the software testing landscape continues to evolve rapidly, driven by... | 0 | 2024-05-29T13:09:42 | https://dev.to/testree/the-future-of-software-testing-exploring-innovative-testing-and-trends-1dj2 | testing, coding, programming, performance | As we dive deeper, the software testing landscape continues to evolve rapidly, driven by technological advancements and the ever-increasing demand for high-quality software. Staying ahead in this dynamic field requires a keen understanding of the latest trends and innovations. In this blog, we will explore the cutting-edge trends and services shaping the future of software testing.
**AI and Machine Learning in Testing**
Artificial Intelligence (AI) and Machine Learning (ML) are revolutionizing software testing. These technologies enable predictive analysis, anomaly detection, and automated test case generation. AI-driven tools can learn from past testing cycles, predict potential failures, and optimize testing efforts, significantly reducing time and cost.
**Shift-Left and Shift-Right Testing**
The concepts of Shift-Left and Shift-Right testing emphasize early and continuous testing throughout the software development lifecycle. Shift-Left moves testing to the earliest stages of development, catching defects sooner and reducing the cost of fixing them. On the other hand, Shift-Right testing focuses on testing in production, using real user data to ensure software reliability and performance.
**Continuous Testing and DevOps Integration**
Continuous Testing, an integral part of the DevOps pipeline, ensures that quality is maintained throughout the development process. By integrating automated tests into CI/CD pipelines, teams can detect issues early, improve collaboration, and accelerate delivery cycles. This trend is crucial for maintaining agility and responsiveness in fast-paced development environments.
**Test Automation with CI/CD Tools**
Automation is at the heart of modern software testing strategies. Leveraging CI/CD tools for test automation ensures consistent execution of test cases, faster feedback loops, and reduced manual effort. Popular tools like Jenkins, GitLab CI, and CircleCI are now equipped with advanced testing capabilities, making automation more accessible and efficient.
**Performance Engineering over Performance Testing**
Performance Engineering is an evolved approach that goes beyond traditional performance testing. It focuses on integrating performance considerations throughout the development lifecycle, ensuring that performance is built into the software from the ground up. This proactive approach helps in identifying potential bottlenecks early and ensures a seamless user experience.
**Security Testing and DevSecOps**
With increasing cyber threats, security testing has become a critical aspect of software development. The integration of security practices into DevOps, known as DevSecOps, ensures that security is a shared responsibility across the development team. Automated security testing tools, static and dynamic analysis, and continuous monitoring are essential components of a robust DevSecOps strategy.
**The Rise of Codeless Testing Tools**
Codeless testing tools are democratizing the testing process, allowing non-technical team members to create and execute tests. These tools use visual interfaces and natural language processing to simplify test creation and maintenance. By reducing the reliance on code, codeless testing tools enhance collaboration and accelerate testing cycles.
**Exploratory Testing and Tester Creativity**
While automation and AI are transforming testing, the importance of human creativity in exploratory testing cannot be overstated. Exploratory testing involves simultaneous learning, test design, and test execution, enabling testers to uncover defects that automated tests might miss. Encouraging creativity and critical thinking in testers is essential for comprehensive software quality assurance.
**The Role of IoT in Software Testing**
The Internet of Things (IoT) is expanding the scope of software testing. Testing IoT applications requires a deep understanding of hardware-software interactions, connectivity issues, and security vulnerabilities. Specialized IoT testing tools and frameworks are emerging to address these unique challenges, ensuring the reliability and security of interconnected devices.
**Blockchain Testing for Decentralized Applications**
As blockchain technology gains traction, the need for specialized testing services for decentralized applications (dApps) is growing. Blockchain testing focuses on validating smart contracts, ensuring data integrity, and verifying the performance of decentralized networks. This niche area of testing is crucial for the adoption and success of blockchain solutions.
The future of [software testing is bright, with innovative services and trends](https://www.nousinfosystems.com/services/testing) driving the industry forward. By embracing AI and ML, continuous testing, performance engineering, and other emerging practices, organizations can ensure their software meets the highest quality standards. Staying updated with these trends will not only enhance the testing process but also contribute to delivering exceptional software products. | testree |
1,868,987 | Jetpack Compose -Difference between mutableStateOf() and derivedStateOf() | Hey Guys!!, Well thought of sharing my experience on those concepts, which might gets overlooked... | 0 | 2024-05-29T13:07:27 | https://dev.to/rishi2062/jetpack-compose-difference-between-mutablestateof-and-derivedstateof-43m9 | android, jetpackcompose, androiddev, opensource |

Hey guys! I thought I would share my experience with these concepts, which tend to get overlooked quite frequently.
Have you ever wondered about the exact use case of mutableStateOf() and derivedStateOf()?
Let's say you have some variable and you want to trigger a UI change (basically a recomposition) when that variable changes; most likely you will use mutableStateOf():
```
val count = mutableStateOf(1)
```
However, to make sure the value is retained across recompositions, you need to use remember; maybe next time I will focus on this.
Now think of a situation where you are storing some value and, whenever that value changes, you want another variable to update automatically; then derivedStateOf() is preferable.
For example:
```
val height by remember{ mutableStateOf(10) }
val width by remember{ mutableStateOf(10) }
val area by remember{ derivedStateOf { height * width } }
```
This way, area will automatically change whenever either dimension variable changes, and any composable observing area will also trigger a recomposition.
In this manner it becomes more readable and less error-prone.
Now you may think this can be achieved with mutableStateOf(), with something like:
```
val height by remember{ mutableStateOf(10) }
val width by remember{ mutableStateOf(10) }
val area by remember{ mutableStateOf(height * width) }
```
Here, if there is any change in height or width, you need to update area manually:
```
area = height * width
```
You can also do this by
```
val height by remember{ mutableStateOf(10) }
val width by remember{ mutableStateOf(10) }
val area by remember(height, width) { mutableStateOf(height * width) }
```
Now if there is any change in height or width, area will also update.
But the main difference here is in restricting the recomposition count, which matters when you have long calculations and a variable that changes very frequently.
Let's end this with an example supporting that last statement, a good one I found on StackOverflow:
```
@Composable
fun CounterButton() {
    val clicks = remember { mutableStateOf(0) }
    val counter = remember(clicks.value / 5) { mutableStateOf(clicks.value / 5) }

    Button(
        onClick = { clicks.value++ }
    ) {
        Text("${counter.value} clicked")
    }
}
```
The counter variable is supposed to change only every 5 clicks. Say you press the button 30 times: the Button recomposes on every click, but the CounterButton() body itself recomposes only each time the remember key clicks.value / 5 changes.
```
@Composable
fun CounterButton() {
    val clicks = remember { mutableStateOf(0) }
    val counter = remember { derivedStateOf { clicks.value / 5 } }

    Button(
        onClick = { clicks.value++ }
    ) {
        Text("${counter.value} clicked")
    }
}
```
Here, the CounterButton() body does not recompose at all.
But there is a notable pitfall: derivedStateOf() does not change the reference of the state object, it only relies on the state's value.
You can learn more here - [DerivedState Pitfall](https://medium.com/@bagadeshrp/derivedstateof-pitfall-jetpack-compose-9487ff1cc6ee)
The article has already run long, but if you are still here, thanks a lot for your time. If you want to correct any fact or bit of logic, let's bombard the comment section. Happy coding!
| rishi2062 |
1,868,985 | Day 6 of my progress as a vue dev | About today So, I started to work on my new project, the DSA visualizer and I had many fun ideas on... | 0 | 2024-05-29T13:06:26 | https://dev.to/zain725342/day-6-of-my-progress-as-a-vue-dev-1159 | webdev, vue, typescript, tailwindcss | **About today**
So, I started to work on my new project, the DSA visualizer, and I had many fun ideas about which approach to go with. After a bit of brainstorming I decided that I want to keep the project limited but extremely playful, so that it is fun to use while learning the concepts of DSA. I'm initially dealing with the stack, queue, linked list and binary search tree. I think these are the easiest to implement for now, and they lay the groundwork for structures that may be added later. I have started the implementation and already ended up with a decent animated visual presentation of a stack.
**What's next**
I will continue at this pace and implement all the structures first, then move on to minor improvements to make the experience a little more appealing. Next up is the queue, which I will work on tomorrow.
**Improvements required**
I need to define an animation style that is subtle and playful. My aim is to make it so that even a 6-year-old can understand the concepts, while the animation and stylistic approach don't seem too childish for a grown-up.
Wish me luck!
| zain725342 |
1,868,983 | Unlocking the Power of Data Integration | In today's hyper-competitive global marketplace, where brands vie for supremacy, the establishment of... | 0 | 2024-05-29T13:05:03 | https://dev.to/linda0609/unlocking-the-power-of-data-integration-3546 | In today's hyper-competitive global marketplace, where brands vie for supremacy, the establishment of grandiose office premises in every target market signifies not just physical presence, but strategic dominance. However, global brands encounter the formidable challenge of managing myriad supplier and distributor relationships across diverse geographic regions. To navigate this intricate web effectively and leverage their [operations](https://www.sganalytics.com/data-management-analytics/operations-analytics/) to the fullest extent, brands rely on the indispensable tool of data integration.
Understanding Data Integration:
Data integration is a multifaceted process involving the acquisition, harmonization, and consolidation of business-critical information from disparate sources. These sources span internal organizational databases, social media platforms, independent publishing networks, and other online sources powered by the vast expanse of the World Wide Web. By seamlessly amalgamating data from these diverse origins, analysts can construct comprehensive reporting assets that unveil macroscopic performance trends and risk factors, all without the need for laborious manual data collection from each individual source.
Scalability and adaptability are crucial facets of effective data integration. Leveraging the flexibility and agility of cloud ecosystems, modern data integration platforms offer a suite of integrated [business intelligence services](https://www.sganalytics.com/data-management-analytics/bi-data-visualization-services/), complete with bespoke data visualization capabilities tailored to the unique requirements of each organization. The underlying technologies, ranging from streaming and unification to report generation, are carefully selected based on the organization's specific integration strategies and objectives.
The Critical Importance of Data Integration:
The burgeoning demand for robust data integration solutions stems from several compelling factors:
1. Comprehensive Business Performance Analysis: Integrated data provides organizations with a panoramic view of their performance landscape, enabling stakeholders to delve into various performance metrics and gain invaluable insights into company-level strengths, weaknesses, opportunities, and threats.
2. Breaking Down Silos: Data integration dismantles the barriers of departmental silos, democratizing access to vital business intelligence assets across the organization. This fosters a culture of collaboration and knowledge-sharing, facilitating the free flow of ideas and insights among diverse professional domains.
3. Enhanced Standardization: Integration illuminates areas requiring standardization improvements, ensuring consistency in data formatting and presentation. Automated tools play a pivotal role in identifying and rectifying potential standardization discrepancies, thereby ensuring uniformity and coherence in data interpretation.
4. Heightened Productivity: Automation of manual data entry tasks translates into significant time savings for employees, bolstering overall productivity. Liberated from mundane operational duties, professionals can channel their energies towards tackling more complex challenges and driving innovation within their respective roles.
5. Personalized Customer Experience: By aggregating and analyzing customer data from disparate sources, organizations can craft tailored experiences that resonate with individual preferences and behaviors. This personalized approach is instrumental in bolstering customer retention and engagement metrics, thereby enhancing overall brand loyalty and profitability.
Data Integration Strategies for Streamlining Information:
Several data integration strategies hold the key to streamlining information flow across organizations:
1. Data Consolidation: This strategy involves centralizing data storage locations, necessitating robust hardware infrastructure and resilient network connectivity. While offering expedited access to data and reporting, this approach may entail substantial budgetary allocations, particularly for organizations grappling with voluminous data sets.
2. Data Streaming: Data streaming, in contrast, leaves data at its point of origin, leveraging modern connectivity technologies to recreate instantaneous copies of data objects. While obviating the need for additional data storage resources, this strategy places greater demands on networking infrastructure.
3. Data Propagation: Combining the benefits of consolidation and streaming, data propagation involves creating local copies of remote data assets for prolonged durations, thereby ensuring data availability and accessibility across disparate locations.
Streamlining Business Information with Data Integration: Real-world Examples:
1. E-Commerce, Accounting, and Inventory Management: Seamlessly integrating data from online marketplaces, accounting software, and inventory management systems empowers organizations to optimize product assortments and pricing strategies in real-time, based on transactional data and market demand insights.
2. Marketing, Sales, and Design: By integrating historical data on customer interactions with various marketing and sales initiatives, organizations can facilitate cross-functional collaboration between marketing, sales, and design teams. This convergence of expertise fosters innovative strategies and campaigns tailored to customer preferences and market dynamics.
3. Sustainability, IT, and Legal Compliance: With cybersecurity and sustainability emerging as paramount concerns in today's business landscape, integrated data on compliance metrics and legal risk exposures equips organizations to navigate regulatory complexities and mitigate potential liabilities effectively.
In Conclusion:
Data integration is essential for businesses to gain insights from their data, improve decision-making, and drive innovation. By breaking down data silos and creating a unified view of information, organizations can better understand their operations, customers, and market trends. Additionally, data integration enables advanced analytics techniques such as data mining, predictive modeling, and machine learning, which can uncover valuable insights and opportunities for growth.
Data integration emerges as a linchpin in liberating organizations from the constraints of siloed data repositories, offering panoramic insights into organizational performance and operational efficiency. While each integration strategy presents its unique set of advantages and challenges, the overarching goal remains the same: to empower organizations with the tools and insights needed to thrive in an increasingly dynamic and competitive business environment. As businesses continue to embrace data integration as a strategic imperative, the market size is poised to surpass 39.2 billion US dollars by 2032, heralding a new era of enhanced productivity, innovation, and sustainable growth. | linda0609 | |
1,868,981 | Innovation in Los Angeles: A Hub of Creativity and Technology | Los Angeles, widely celebrated for its entertainment industry, is also a thriving hub for innovation... | 0 | 2024-05-29T13:01:46 | https://dev.to/stevemax237/innovation-in-los-angeles-a-hub-of-creativity-and-technology-i36 | appdevelopment | Los Angeles, widely celebrated for its entertainment industry, is also a thriving hub for innovation and technology. Beyond the glitz and glamour of Hollywood, this sprawling city is home to a dynamic ecosystem of startups, tech giants, and creative minds driving forward-thinking projects across various sectors. From groundbreaking app development to pioneering sustainable technologies, Los Angeles is at the forefront of modern innovation.
## The Rise of App Development in Los Angeles
App development is a significant part of Los Angeles's tech scene. The [App Development Companies in Los Angeles](https://mobileappdaily.com/directory/mobile-app-development-companies/us/los-angeles?utm_source=dev&utm_medium=hc&utm_campaign=mad) are known for creating cutting-edge applications across various industries, including entertainment, healthcare, finance, and e-commerce. The innovative spirit of Los Angeles shines brightly in its dynamic and diverse app development companies.
## Leading App Development Companies in Los Angeles
Several app development companies in Los Angeles have gained national and international recognition for their innovative solutions. Here are a few notable ones:
Dogtown Media: Known for its expertise in mobile app development, Dogtown Media has created over 200 apps in areas like healthcare, finance, and the Internet of Things (IoT). Their innovative approach to app design and development has earned them numerous accolades.
STRV: This app development company specializes in mobile and web solutions, offering services from design to development and post-launch support. STRV's clients include prominent brands like Microsoft, Hallmark, and Barnes & Noble.
Tack Mobile: Tack Mobile focuses on creating user-friendly mobile applications and IoT solutions. Their team of experienced developers and designers collaborates with clients to build apps that stand out in the competitive market.
Swenson He: A full-service firm specializing in custom mobile and web app development, Swenson He is known for its innovative and tailored solutions. Their projects often incorporate the latest technologies, such as artificial intelligence and blockchain.
## Innovation in App Development
Innovation in app development goes beyond creating functional apps. Los Angeles-based developers are leveraging emerging technologies to enhance user experience and deliver more sophisticated solutions. Some of the key trends in app development include:
Artificial Intelligence (AI) and Machine Learning (ML): Many LA app developers integrate AI and ML into their applications to provide personalized experiences, improve efficiency, and offer advanced analytics. AI-powered chatbots and recommendation engines are becoming standard features in apps.
Augmented Reality (AR) and Virtual Reality (VR): AR and VR technologies are revolutionizing industries like gaming, real estate, and education. Los Angeles, with its strong entertainment background, is at the forefront of developing AR and VR applications that offer immersive experiences.
Blockchain Technology: Blockchain is enhancing security, transparency, and trust in various applications, particularly in finance and supply chain management. LA developers are exploring innovative ways to incorporate blockchain into their solutions.
Internet of Things (IoT): IoT technology is transforming how devices interact with each other and with users. App developers in Los Angeles are creating IoT solutions that enable smart homes, connected cars, and other advanced applications.
## The Birthplace of Creative Innovation
Los Angeles has always been synonymous with creativity. While it’s best known for Hollywood, the city’s creative energy extends far beyond film and television. This rich, artistic spirit now infuses its tech and digital media scenes, creating a fertile ground for innovative ideas.
The city’s diversity and cultural heritage play a big role in fostering this innovative environment. Here, artists, engineers, and entrepreneurs from all backgrounds collaborate to create unique solutions to global challenges. This melting pot of cultures and perspectives is a significant driver of the city's success in innovation.
## Silicon Beach: The Epicenter of Tech Innovation
Silicon Beach, a coastal area in Los Angeles, has emerged as a central hub for tech startups and established companies. Stretching from Santa Monica to Venice, this region is often compared to Silicon Valley because of its dense concentration of tech firms and its vibrant entrepreneurial culture.
Big names like Google, Snap Inc., and Hulu have set up offices here, drawing top talent and creating a competitive yet collaborative environment. Numerous incubators and accelerators, such as the Los Angeles Cleantech Incubator (LACI) and Amplify LA, support early-stage startups in scaling their businesses.
## The Role of Incubators and Accelerators
Incubators and accelerators are crucial in Los Angeles’s innovation landscape. They provide startups with essential resources like funding, mentorship, and networking opportunities, helping bridge the gap between innovative ideas and successful businesses.
For instance, LACI focuses on clean technology and sustainability, supporting startups that are developing solutions to environmental challenges. Their work has positioned Los Angeles as a leader in the green tech industry, with innovations ranging from renewable energy to sustainable transportation.
| stevemax237 |
1,868,980 | TypeORM integration with migrations in NestJS | Love to work with you, You can hire me on Upwork. To even think about TypeORM we must install all... | 27,543 | 2024-05-29T13:00:52 | https://dev.to/depak379mandal/typeorm-integration-with-migrations-in-nestjs-b98 | javascript, typescript, node, nestjs | Love to work with you, You can hire me on [Upwork](https://www.upwork.com/freelancers/~011463d96b5b87d7ff).
Before we can do anything with TypeORM, we must install all the required libraries in the application (the command was already included in the getting-started article). We can run the command below to install TypeORM with the pg Postgres driver.
```bash
npm install --save @nestjs/typeorm typeorm pg
```
We can define a factory class/function that will return all the required details/config for our TypeORM module. It will be helpful to segregate modules in terms of their functionality.
Before everything else we need a database. I have created a nest-series database in Postgres using DBeaver; you can use pgAdmin or any other tool, just connect to your local DB, or use a remote one if you prefer. After that, you have to create a connection URL, which will look like the one below.
```ini
DATABASE_URL=postgresql://<username>:<password>@localhost:5432/nest-series
```
Just replace the placeholders above with your credentials and place the URL in .env. Now we move to our config factory class. The config class will reuse the details from the ormconfig.ts file, so ormconfig.ts can also be used as the data source in npm commands.
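If you are unsure which part of the connection URL is which, Node's built-in WHATWG URL parser can break a sample connection string apart. This is just a sanity check, not part of the app, and the credentials below are made-up placeholders:

```typescript
// Not part of the app: a quick sanity check showing which piece of the
// connection URL is which, using Node's built-in WHATWG URL parser.
// 'danimai' / 's3cret' are made-up placeholder credentials.
const dbUrl = new URL('postgresql://danimai:s3cret@localhost:5432/nest-series');

console.log(dbUrl.protocol); // postgresql:
console.log(dbUrl.username); // danimai
console.log(dbUrl.hostname); // localhost
console.log(dbUrl.port); // 5432
console.log(dbUrl.pathname.slice(1)); // database name: nest-series
```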
```typescript
// ormconfig.ts
import { DataSource } from 'typeorm';
import * as dotenv from 'dotenv';
import { PostgresConnectionOptions } from 'typeorm/driver/postgres/PostgresConnectionOptions';
dotenv.config();
export const configs: PostgresConnectionOptions = {
type: 'postgres',
url: process.env.DATABASE_URL,
entities: [__dirname + '/src/**/*.entity.{ts,js}'],
migrations: [__dirname + '/src/modules/database/migrations/*{.ts,.js}'],
dropSchema: false,
logging: false,
};
const dataSource = new DataSource(configs);
export default dataSource;
```
```typescript
// src/modules/database/typeorm.factory.ts
import { Injectable } from '@nestjs/common';
import { TypeOrmModuleOptions, TypeOrmOptionsFactory } from '@nestjs/typeorm';
import ORMConfig from '../../../ormconfig';
@Injectable()
export class TypeORMConfigFactory implements TypeOrmOptionsFactory {
createTypeOrmOptions(): TypeOrmModuleOptions {
return ORMConfig.options;
}
}
```
The Injectable decorator makes this class available to Nest's dependency injection; as I mentioned in an earlier article, we will be using the config service as an injected service, and we could inject configService as ConfigService in the constructor. We have implemented the TypeOrmOptionsFactory interface to follow the contract expected of a factory config class. Now we can use this factory in the App Module file, where we register our TypeORM module.
```typescript
// src/modules/app/app.module.ts
...
import { TypeOrmModule } from '@nestjs/typeorm';
import { TypeORMConfigFactory } from '../database/typeorm.factory';
const modules = [];
export const global_modules = [
...
TypeOrmModule.forRoot({
useClass: TypeORMConfigFactory,
}),
];
@Module({
imports: [...global_modules, ...modules],
})
export class AppModule {}
```
After this, you can run npm run start:dev to confirm nothing is going wrong in the TypeORM setup. As we move forward, we need entities matching the requirements we defined in the first article. We will start with the user entity, but it also requires a base entity that automatically handles the timestamps and ID for us.
```typescript
// src/entities/base.ts
import { ApiProperty } from '@nestjs/swagger';
import {
BaseEntity as _BaseEntity,
CreateDateColumn,
DeleteDateColumn,
PrimaryGeneratedColumn,
UpdateDateColumn,
} from 'typeorm';
export abstract class BaseEntity extends _BaseEntity {
// we are using uuid instead of integers
@ApiProperty()
@PrimaryGeneratedColumn('uuid')
id: string;
@ApiProperty()
@CreateDateColumn()
created_at: Date;
@ApiProperty()
@UpdateDateColumn()
updated_at: Date;
@DeleteDateColumn()
deleted_at: Date;
}
```
I have introduced Swagger decorators above; the ApiProperty decorator handles Swagger doc generation. deleted_at will not be exposed in the Swagger doc, so we are not adding the decorator for it. One little thing you might have noticed is that I created an abstract class; the reason is that we don't want TypeORM to create a table for the base entity or apply any other tracking to it. Now we move on to the user entity and the user token entity.
```typescript
// src/entities/user.entity.ts
import { Column, Entity, OneToMany } from 'typeorm';
import { ApiHideProperty, ApiProperty } from '@nestjs/swagger';
import { BaseEntity } from './base';
import { Token } from './user_token.entity';

@Entity({ name: 'users' })
export class User extends BaseEntity {
  @ApiProperty({ example: 'Danimai' })
  @Column({ type: 'varchar', length: 50 })
  first_name: string;

  @ApiProperty({ example: 'Mandal' })
  @Column({ type: 'varchar', length: 50, nullable: true })
  last_name: string;

  @ApiProperty({ example: 'example@danimai.com' })
  @Column({ type: 'varchar', length: 255, unique: true })
  email: string;

  @ApiProperty({ example: 'Password@123' })
  @Column({ type: 'varchar', length: 255, nullable: true })
  password: string;

  @ApiHideProperty()
  @Column({ type: 'timestamp with time zone', nullable: true })
  email_verified_at: Date;

  @ApiHideProperty()
  @Column({ type: 'boolean', default: false })
  is_active: boolean;

  // Inverse side of Token.user; needed because the token entity
  // references it via (user) => user.tokens
  @ApiHideProperty()
  @OneToMany(() => Token, (token) => token.user)
  tokens: Token[];
}
```
```typescript
// src/entities/user_token.entity.ts
import { randomStringGenerator } from '@nestjs/common/utils/random-string-generator.util';
import { BeforeInsert, Column, Entity, JoinColumn, ManyToOne } from 'typeorm';
import { User } from './user.entity';
import { BaseEntity } from './base';
export enum TokenType {
REGISTER_VERIFY = 'REGISTER_VERIFY',
RESET_PASSWORD = 'RESET_PASSWORD',
}
@Entity({ name: 'user_tokens' })
export class Token extends BaseEntity {
@Column({ type: 'varchar', length: 100 })
token: string;
@Column({ type: 'boolean', default: false })
is_used: boolean;
@Column({ type: 'enum', enum: TokenType })
type: TokenType;
@Column({ type: 'timestamp' })
expires_at: Date;
@Column({ type: 'uuid' })
user_id: string;
@ManyToOne(() => User, (user) => user.tokens)
@JoinColumn({ name: 'user_id' })
user: User;
// This decorator allows to run before insert command
// setting up token automatically before insert
@BeforeInsert()
async generateToken() {
// generate long token for registration and forgot password
this.token = `${randomStringGenerator()}-${randomStringGenerator()}`;
}
}
```
We also need migration files to be generated for these entities so everyone can sync their DB. To do that, we are going to add migration scripts to package.json.
```json
{
...
"scripts": {
...
"migration:generate": "typeorm-ts-node-commonjs migration:generate src/modules/database/migrations/migrations -d ormconfig.ts",
"migration:run": "typeorm-ts-node-commonjs -d ormconfig.ts migration:run",
"migration:revert": "typeorm-ts-node-commonjs -d ormconfig.ts migration:revert"
},
...
}
```
After adding the commands above to package.json, we can generate a migration file using npm run migration:generate and then apply it with npm run migration:run. If we think a migration was wrong or want to edit it, we can revert it in the DB with npm run migration:revert. If you want to follow along, run the commands below.
```bash
npm run migration:generate
npm run migration:run
```
After running the generate command, we will get a migration file in the src/modules/database/migrations folder; mine is:
```typescript
// src/modules/database/migrations/1712332008837-migrations.ts
import { MigrationInterface, QueryRunner } from 'typeorm';
export class Migrations1712332008837 implements MigrationInterface {
name = 'Migrations1712332008837';
public async up(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(
`CREATE TYPE "public"."user_tokens_type_enum" AS ENUM('REGISTER_VERIFY', 'RESET_PASSWORD')`,
);
await queryRunner.query(
`CREATE TABLE "user_tokens" ("id" uuid NOT NULL DEFAULT uuid_generate_v4(), "created_at" TIMESTAMP NOT NULL DEFAULT now(), "updated_at" TIMESTAMP NOT NULL DEFAULT now(), "deleted_at" TIMESTAMP, "token" character varying(100) NOT NULL, "is_used" boolean NOT NULL DEFAULT false, "type" "public"."user_tokens_type_enum" NOT NULL, "expires_at" TIMESTAMP NOT NULL, "user_id" uuid NOT NULL, CONSTRAINT "PK_63764db9d9aaa4af33e07b2f4bf" PRIMARY KEY ("id"))`,
);
await queryRunner.query(
`CREATE TABLE "users" ("id" uuid NOT NULL DEFAULT uuid_generate_v4(), "created_at" TIMESTAMP NOT NULL DEFAULT now(), "updated_at" TIMESTAMP NOT NULL DEFAULT now(), "deleted_at" TIMESTAMP, "first_name" character varying(50) NOT NULL, "last_name" character varying(50), "email" character varying(255) NOT NULL, "password" character varying(255), "email_verified_at" TIMESTAMP WITH TIME ZONE, "is_active" boolean NOT NULL DEFAULT false, CONSTRAINT "UQ_97672ac88f789774dd47f7c8be3" UNIQUE ("email"), CONSTRAINT "PK_a3ffb1c0c8416b9fc6f907b7433" PRIMARY KEY ("id"))`,
);
await queryRunner.query(
`ALTER TABLE "user_tokens" ADD CONSTRAINT "FK_9e144a67be49e5bba91195ef5de" FOREIGN KEY ("user_id") REFERENCES "users"("id") ON DELETE NO ACTION ON UPDATE NO ACTION`,
);
}
public async down(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(
`ALTER TABLE "user_tokens" DROP CONSTRAINT "FK_9e144a67be49e5bba91195ef5de"`,
);
await queryRunner.query(`DROP TABLE "users"`);
await queryRunner.query(`DROP TABLE "user_tokens"`);
await queryRunner.query(`DROP TYPE "public"."user_tokens_type_enum"`);
}
}
```
And after `npm run migration:run`, your DB should look like mine:

As we move forward, we keep arranging our structure, and the current layout looks pretty good in terms of clean code.
```bash
src
├── entities
│ ├── base.ts
│ ├── user.entity.ts
│ └── user_token.entity.ts
├── main.ts
└── modules
├── app
│ └── app.module.ts
├── auth
├── config
│ ├── app.config.ts
│ ├── database.config.ts
│ └── index.ts
├── database
│ ├── migrations
│ │ └── 1712332008837-migrations.ts
│ └── typeorm.factory.ts
└── user
```
If you run `npm run start:dev`, you will see the output below. If you have any doubts or issues, please mention them in the comments.
```bash
[9:23:31 PM] Starting compilation in watch mode...
[9:23:33 PM] Found 0 errors. Watching for file changes.
[Nest] 26803 - 04/05/2024, 9:23:34 PM LOG [NestFactory] Starting Nest application...
[Nest] 26803 - 04/05/2024, 9:23:34 PM LOG [InstanceLoader] AppModule dependencies initialized +11ms
[Nest] 26803 - 04/05/2024, 9:23:34 PM LOG [InstanceLoader] TypeOrmModule dependencies initialized +1ms
[Nest] 26803 - 04/05/2024, 9:23:34 PM LOG [InstanceLoader] ConfigHostModule dependencies initialized +0ms
[Nest] 26803 - 04/05/2024, 9:23:34 PM LOG [InstanceLoader] ConfigModule dependencies initialized +9ms
[Nest] 26803 - 04/05/2024, 9:23:35 PM LOG [InstanceLoader] TypeOrmCoreModule dependencies initialized +183ms
[Nest] 26803 - 04/05/2024, 9:23:35 PM LOG [NestApplication] Nest application successfully started +6ms
```
In the next article, we will put our controllers and validation in place, so that we can attach our services in the following articles of the series. | depak379mandal |
1,869,020 | Making the Most of LLMs with AI Agent Tools | AI agents are all the rage. With the ever-improving quality of Large Language Models, the demand... | 0 | 2024-06-11T14:28:56 | https://blog.composio.dev/ai-agent-tools/ | ---
title: Making the Most of LLMs with AI Agent Tools
published: true
date: 2024-05-29 12:59:37 UTC
tags:
canonical_url: https://blog.composio.dev/ai-agent-tools/
---

AI agents are all the rage. With the ever-improving quality of Large Language Models, the demand for AI automation is also increasing. Large Language and Multi-modal Models are efficient at reasoning, summarizing, general question answering, and more. These reasoning abilities enable LLMs to analyze complex tasks and break them into smaller sub-tasks.
However, to fully leverage their capabilities in complex automation scenarios, LLMs require the right tools. By equipping them with such tools, these models can intelligently decide which tool to use and when to use it during task execution. To make sure tasks are executed properly, the tools need to be reliable.
Composio is a platform that provides production-ready tool integrations with LLM frameworks like LangChain, AutoGen, and CrewAI to build reliable AI agents. Composio's repertoire has 150+ out-of-the-box tools for apps across genres like CRM, productivity, SDE, etc., and it also lets you easily add custom tools.
So, this article will explore AI agents, agent tools, and specifically Composio’s tool integrations.
## Learning Objectives
- Learn about AI agents, their definition, working, and usefulness.
- Understand what agent tools are.
- Explore Composio tool integrations to empower agents.
- Learn about custom tools integration on Composio.
## What are AI agents?
We now have a brief idea about agents, so let's dig a bit deeper. Agents are pieces of software that can dynamically interact with their environment, and the AI in the term "AI agents" refers to Large Language Models (LLMs) or Large Multi-modal Models (LMMs).
LLMs possess great reasoning ability (thanks to extensive training on reasoning tasks). This enables them to analyze a complex task step by step. When LLMs have access to the right tools, they can break down a problem statement and use the right tool to execute each sub-task as and when needed. The best example of this is the ChatGPT app itself: it has access to a code interpreter, the Internet, and DALL·E. Based on the given query, it decides which tool to use. If you ask it to create an image, it will use DALL·E; to execute code, it will choose the code interpreter. However, agents do not always need to live in a chat app. We can use external apps like Discord, Slack, or GitHub to trigger an agentic workflow. For instance, an agent with Slack and Notion integrations will trigger when a message is sent in the Slack channel. The agent will pick up the task, execute it, write the result to a Notion doc, and return a confirmation message in the Slack channel.
So, AI agents are LLMs augmented with tools and goals.
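Framework details aside, the Slack-to-Notion flow described above has a simple generic shape. The sketch below is not Composio's API; the functions are made-up stubs that only illustrate the trigger → decide → act → report control flow:

```typescript
// Generic trigger -> decide -> act -> report loop.
// All functions are made-up stubs standing in for real Slack/Notion/LLM calls.
type Task = { channel: string; text: string };

// In a real agent, the LLM would pick the tool; here a trivial rule stands in.
const pickTool = (task: Task) => (task.text.includes('research') ? 'notion' : 'none');
const writeToNotion = (text: string) => `notion-doc:${text.length}`;
const replyInSlack = (channel: string, msg: string) => `[${channel}] ${msg}`;

function handleTrigger(task: Task): string {
  const tool = pickTool(task);
  if (tool === 'notion') {
    const doc = writeToNotion(task.text); // execute the task
    return replyInSlack(task.channel, `Done, saved to ${doc}`); // confirm back
  }
  return replyInSlack(task.channel, 'No action taken');
}

console.log(handleTrigger({ channel: '#research', text: 'research LLM agents' }));
```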
## What are Agent tools?
Tools are pieces of software that enable LLMs to execute tasks. They are interfaces that let LLMs interact with the external environment, giving them the agency to carry out a given task.
With this in mind, Composio offers over 150 agent toolkits, each packed with built-in actions and triggers. It's like having a fully stocked art supply store at your disposal. This ensures that whatever the project, you've got the tools needed to tackle it efficiently.
## Composio’s Comprehensive Toolkit
Imagine you're managing multiple projects across different platforms. Normally, this could involve a lot of manual coordination and checking in and might become tedious and time-consuming. That's where Composio's tools come into play. For instance, you could set up an automation that syncs your project tasks between GitHub and a project management tool like Trello or Asana. Whenever an issue is updated in GitHub, it updates your project board.
Or you can use Composio’s Typeform and Google Sheet integrations to automate user feedback collection and update it to Google Sheet for further downstream tasks like CRM integration for lead management, trend analysis, etc.
These are only a few examples. You can integrate multiple Composio tools across the categories with the LLM frameworks of your choice to automate complex workflows.
Check out the official tools supported here in the [Tools Catalog](https://app.composio.dev/apps?category=all&ref=blog.composio.dev).

For a detailed look at how Composio operates, read this walk-through guide, where Composio’s Slack and Notion integration is used to create an AI research assistant.
[Build an AI Research Assistant Using CrewAI and Composio](https://www.analyticsvidhya.com/blog/2024/05/ai-research-assistant-using-crewai-and-composio/?ref=blog.composio.dev)
## Custom AI Tools
Composio also gives developers the freedom to build [custom integration](https://docs.composio.dev/introduction/foundations/components/integrations/custom-integration?ref=blog.composio.dev)s for specific needs. All you need to do is follow a few structured steps using the OpenAPI Spec. Here's how to get started:
- **Create or Obtain OpenAPI Spec** : Begin by acquiring the OpenAPI Spec for the application you want to integrate with Composio. If your chosen application lacks an OpenAPI Spec, you can create one using the [Swagger Editor.](https://editor.swagger.io/?ref=blog.composio.dev)
- **Create the `integrations.yaml` File** : Prepare an **`integrations.yaml`** file using the provided base template. This file should be customized to include the authentication schemes suitable for the tool you are integrating, detailing essential aspects such as the application's name, description, and authentication methods.
- **Fill Out Authentication Details** :
- Select the appropriate authentication method—OAuth1, OAuth2, API-KEY, or BASIC—based on what the custom tool supports.
- For tools like GitHub, utilize OAuth2 and include necessary details such as **`authorization_url`** , **`token_url`** , **`default_scopes`** , and other relevant parameters.
- **Push and Copy Repository URL** : Once your **`integrations.yaml`** file is ready, push the changes to your repository and copy the URL of this repository.
- **Add Your Custom Tool on Composio** :
- Navigate to the settings page on Composio.
- Access the "Add Open API spec" section.
- Upload both the OpenAPI Spec file and the **`integrations.yaml`** file.
- Initiate the integration by clicking on the "Start import" button.
- **Test Your Custom Tool on Composio** :
- Return to the tools catalog on Composio.
- Locate and select the newly created tool.
- Connect your account to use the tool and ensure it functions as expected.
With these steps, you can efficiently integrate and automate your workflows using Composio, harnessing the full potential of its expansive toolkit to meet your specific project needs.
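To make the steps concrete, here is a *hypothetical* sketch of what an `integrations.yaml` for a GitHub-style OAuth2 tool might contain. The field layout below is illustrative only — the `name`, `description`, and OAuth2 parameters (`authorization_url`, `token_url`, `default_scopes`) come from the steps above, but the exact schema should be taken from Composio's base template:

```yaml
# Hypothetical integrations.yaml sketch -- field layout is illustrative,
# not Composio's official schema; start from the provided base template.
name: my-custom-tool
description: Custom GitHub-style integration for Composio
auth_schemes:
  - scheme: OAuth2
    authorization_url: https://github.com/login/oauth/authorize
    token_url: https://github.com/login/oauth/access_token
    default_scopes:
      - repo
      - read:user
```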
Here’s a quick video for adding a custom tool to Composio.
<iframe width="200" height="150" src="https://www.youtube.com/embed/hTtyKpl6G2g?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Add Custom Tools to Composio using OpenAPI Spec"></iframe>
## Conclusion
AI agents are inevitable, and it is safe to assume that in the future most software systems will integrate AI agents in one way or another. From automating mundane day-to-day tasks to handling complex enterprise operations, the scope is vast. Composio provides a robust platform designed to meet these needs. With 150+ production-ready agent tools offering many actions and triggers, developers can create reliable and useful AI agents that work.
Tool sets from different categories like Productivity, SDE, CRM, social media, marketing, design, and more can be tailored to specific tasks, ensuring versatility across all business functions. Whether automating communication channels, code deployments, managing customer relationships, enhancing social media engagement, driving marketing campaigns, or facilitating design processes, Composio’s tools adapt to the unique demands of each domain. This will allow businesses to operate more efficiently, reduce costs, and focus human efforts on more creative tasks.
## Frequently Asked Questions about AI Agent Tools
1. What are AI agents?
Ans. AI agents are LLMs augmented with specific tools and goals, enabling them to carry out complex tasks autonomously.
2. What are Agent tools?
Ans. Agent tools are software that serves as the interface between Large Language Models (LLMs) and external applications, enabling the completion of tasks.
3. What is Composio?
Ans. Composio is a platform that provides production-ready tool integration with LLM frameworks to build reliable AI agents for automating complex workflows. | sohamganatra | |
1,868,978 | ReactJS Interview Questions and Answers | Q1. What is React JS? React JS is a JavaScript library used for building user interfaces,... | 0 | 2024-05-29T12:57:46 | https://dev.to/lalyadav/reactjs-interview-questions-and-answers-3586 | react, reactjsdevelopment, reactjsinterview, reactjsinterviewquestions | **Q1. What is React JS?**
**[React JS](https://www.onlineinterviewquestions.com/react-js-interview-questions/)** is a JavaScript library used for building user interfaces, particularly single-page applications. Developed by Facebook, it allows developers to create reusable UI components.

**Q2. What are the key features of React JS?**
React JS offers features like virtual DOM for efficient rendering, component-based architecture for reusability, JSX syntax for easy templating, and one-way data binding for predictable state management.
**Q3. What is JSX in React?**
JSX (JavaScript XML) is a syntax extension used in React for describing the UI components. It allows developers to write HTML-like code within JavaScript, making it easier to visualize and understand the structure of UI components.
**Q4. What is a component in React?**
In React, a component is a self-contained module that encapsulates a piece of UI. Components can be reusable and can represent any part of a user interface, from simple buttons to complex forms.
**Q5. What is a state in React?**
State in React is an object that represents the current data of a component. It can be modified over time in response to user actions or other events, triggering UI updates. State management is essential for building interactive and dynamic user interfaces.
**Q6. What is props in React?**
Props (short for properties) are a way to pass data from parent components to child components in React. They are read-only and are used to customize the behavior and appearance of child components based on the parent’s data.
**Q7. What is the difference between state and props in React?**
State is managed internally by a component and can be modified over time, while props are passed from parent components and are immutable within the child component. State is used for internal component data, while props are used for inter-component communication.
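The distinction can be sketched without any framework at all. The plain-JavaScript toy below is *not* real React — it is just an illustration of the contract: props are frozen input passed in from outside, while state is private data the "component" updates itself:

```javascript
// Toy illustration of the props/state contract -- not real React.
function makeCounter(props) {
  const frozenProps = Object.freeze({ ...props }); // props: read-only input from the parent
  let state = { count: 0 };                        // state: private, mutable over time

  return {
    // only the component itself is allowed to update its state
    increment() { state = { count: state.count + 1 }; },
    render() { return `${frozenProps.label}: ${state.count}`; },
  };
}

const counter = makeCounter({ label: "Clicks" });
counter.increment();
counter.increment();
console.log(counter.render()); // "Clicks: 2"
```

In real React the same contract holds: a parent customizes a child through props, and the child triggers its own re-renders by updating state.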
**Q8. What are the lifecycle methods in React?**
React components have lifecycle methods that allow developers to hook into different stages of a component’s lifecycle, such as mounting, updating, and unmounting. Examples include componentDidMount, componentDidUpdate, and componentWillUnmount.
**Q9. What is the significance of keys in React lists?**
Keys are special attributes used by React to identify and track elements in lists. They help React efficiently update the UI by providing a stable identity to each list item, enabling better performance and avoiding unnecessary re-renders. | lalyadav |
1,868,977 | Where Can You Find The Best Tractor Blower? | Keeping your property clear of leaves, snow, or debris can be a real chore, especially with large... | 0 | 2024-05-29T12:55:57 | https://dev.to/ekendra_ode_8855cdb8b4b95/where-can-you-find-the-best-tractor-blower-19la | Keeping your property clear of leaves, snow, or debris can be a real chore, especially with large areas to cover. That's where a tractor blower comes in – it's like a super-powered leaf blower attached to your trusty tractor, ready to tackle tough jobs with ease. But with so many options on the market, where do you even begin to look? Don't worry, we've got you covered!
Here are some tips to help you find the best tractor blower for your needs:
Size Matters: Consider the size of the area you need to clear. Bigger blowers are great for vast fields, while smaller ones might be perfect for pathways or driveways. Tractor blowers come in various sizes, so measure your space and choose accordingly.
Blowing Power: Think about the type of debris you'll be dealing with. For light leaves, a less powerful blower might suffice. But for heavy, wet leaves, snow, or branches, you'll need a blower that packs a punch. Look for specifications on airflow and blowing force to make the right call.
Attachment Style: Tractor blowers can be three-point hitch mounted (attached to the back of your tractor) or PTO-driven (powered by the tractor's engine). Three-point hitch blowers are simpler to attach, while PTO-driven blowers offer more power. Consider your comfort level and the size of your tractor when making this choice.
Fancy Features (Optional): Some tractor blowers come with cool features like adjustable nozzles to direct the airflow or remote controls for added convenience. These features can be helpful, but they also add to the cost. Decide what functionalities are important to you and choose a blower that fits your budget.
Once you've considered these factors, you can start researching specific tractor blower brands and models. Many farm equipment stores or online retailers will carry a variety of options. Don't hesitate to ask questions or read customer reviews to find a blower that gets the job done right.
With a little planning, you'll be well on your way to finding the perfect tractor blower to keep your property clean and debris-free, all season long!
| ekendra_ode_8855cdb8b4b95 | |
1,868,976 | Boost Your Skills Online With AI Certification | Artificial intelligence has emerged as a hotbed for career development as a technology professional... | 0 | 2024-05-29T12:53:50 | https://dev.to/ailearning/boost-your-skills-online-with-ai-certification-233o | aicertification, aicertificationcourse, onlineaicourse, ai | Artificial intelligence has emerged as a hotbed for career development as a technology professional in 2024. Following the generative AI boom in 2023, it is inevitable to think of potential ways to develop your career as an AI professional. At this point in time, an [AI certification](https://futureskillsacademy.com/certification/certified-ai-professional/) can help you earn more advantages than the lucrative salary packages for AI professionals.
Artificial intelligence boosts productivity alongside encouraging economic growth, thereby suggesting the availability of promising career prospects for AI experts. Therefore, many aspiring candidates seek valuable resources for career development as an AI professional. Let us learn more about the ways in which a certification in AI technology can help you.
## What is an AI Certification?
Artificial intelligence or AI certifications are special credentials that prove an individual's fluency in AI technologies, such as machine learning, data analytics, and deep learning. An [artificial intelligence certification](https://futureskillsacademy.com/certification/certified-ai-professional/) can help you stand out from the crowd with proof of your expertise in AI. As AI becomes a mandatory priority of businesses in different industries, it is important to harness the power of such credentials to prove your capabilities as an AI professional.
### How Can AI Certifications Help You?
The decision to choose AI certifications can be difficult for beginners. You might have many doubts regarding the value of certifications in the continuously evolving field of AI. However, an AI certification course offers a broad range of benefits to help you achieve career goals in artificial intelligence. Here are some of the most notable benefits of choosing the best AI certifications for your career.
#### - Skill Development
The most obvious reason to choose AI certifications is the assurance of comprehensive skill development. You must have the skills required to help businesses make the most of AI technologies such as machine learning, natural language processing, deep learning, and neural networks. A certified AI professional can solve complex problems and foster innovation across different business operations in diverse industries. On top of it, the skills you learn in a certification course can be useful for many other roles. Your AI skills can also help you gain a competitive advantage in your existing job with better productivity.
#### - Multiple Opportunities for Career Development
Artificial intelligence certifications can also open avenues to a wide range of career development opportunities. As the demand for AI professionals continues to grow, an AI certification may be the best resource to help you capitalize on job opportunities in different roles. With a certification, you can become a valuable asset in the AI job market. As a result, you would be eligible for better rewards and growth prospects as compared to other AI professionals.
#### - Maintain an Additional Edge in Your Career
The next crucial benefit of AI certifications is the assurance of maintaining an additional edge. How does an artificial intelligence certification give you an additional edge? Certifications help you stay relevant in the constantly evolving AI job market.
The rising use of AI as an integral aspect of business operations implies that employers would prioritize certified professionals. The best AI certifications not only ensure that you improve your AI skills but also help you familiarize yourself with the latest trends and new advancements in artificial intelligence.
### Where Can You Find the Best AI Certifications?
The advantages of artificial intelligence certifications prove that they are invaluable assets for professional development. However, it is difficult to pick an AI certification course from the massive assortment of alternatives available on different online platforms. At the same time, you can find professional certification courses with accreditation and the advantage of self-paced online learning. Make sure that you choose a certification course that can bring you recognition in the industry alongside enhancing your skills.
### Final Words
The value of artificial intelligence certifications has increased by huge margins with the growth in demand for certified AI experts. You can find promising alternatives such as the [AI certification course](https://futureskillsacademy.com/certification/certified-ai-professional/) by Future Skills Academy. With the right certification course by your side, you can not only improve your AI skills but also discover promising career paths in the domain of AI. Learn more about the CAIP certification and find out whether it is the ideal pick for your AI career now.
| ailearning |
1,847,125 | Ollama Meets AMD GPUs | Large Language Models (LLMs) are revolutionizing the way we interact with machines. Their... | 0 | 2024-05-26T04:25:37 | https://dev.to/ajeetraina/ollama-meets-amd-gpus-1f1c | Large Language Models (LLMs) are revolutionizing the way we interact with machines. Their ever-growing complexity demands ever-increasing processing power. This is where accelerators like GPUs come into play, offering a significant boost for training and inference tasks.
The good news? Ollama, a popular self-hosted large language model server, now joins the party with official support for AMD GPUs through ROCm! This blog dives into how to leverage this exciting new development, even if your Ollama server resides within a Kubernetes cluster.
## Ollama Meets AMD GPUs
It's a match made in compute heaven: Ollama's integration with ROCm allows you to utilize the raw power of your AMD graphics card for running LLMs. This translates to faster training times and smoother inference experiences. But wait, there's more!
## Benefits of AMD + ROCm for Ollama:
- Cost-effective performance: AMD GPUs offer exceptional value for money, making them a great choice for budget-conscious LLM enthusiasts.
- Open-source advantage: ROCm, the open-source platform powering AMD's GPU ecosystem, fosters a collaborative environment and continuous development.
## Setting Up Ollama with AMD and ROCm on Kubernetes
Here's how to deploy Ollama with ROCm support on your Kubernetes cluster:
1. Install the ROCm Kubernetes Device Plugin:
This plugin facilitates communication between Ollama and your AMD GPU. Follow the official guide at https://github.com/ROCm/k8s-device-plugin/blob/master/README.md for installation instructions.
2. Deploy Ollama with ROCm Support (using Kubernetes YAML):
The YAML configuration you provided offers a solid template:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama-rocm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama-rocm
  template:
    metadata:
      labels:
        app: ollama-rocm
    spec:
      containers:
        - name: ollama
          image: ollama/ollama:rocm
          ports:
            - containerPort: 11434
              name: ollama
          volumeMounts:
            - name: ollama-data
              mountPath: /root/.ollama
          resources:
            requests:
              memory: "32Gi"
              cpu: "64"
            limits:
              memory: "100Gi"
              cpu: "64"
              amd.com/gpu: 1
      volumes:
        - name: ollama-data
          hostPath:
            path: /var/lib/ollama/.ollama
            type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
  name: ollama-service-rocm
spec:
  selector:
    app: ollama-rocm
  ports:
    - protocol: TCP
      port: 11434
      targetPort: 11434
      name: ollama
```
## Key points to note:
1. The ollama/ollama:rocm image ensures you're using the ROCm-compatible version of Ollama.
2. The amd.com/gpu: 1 resource request signifies your desire to utilize one AMD GPU for Ollama.
3. Exposing Ollama Services:
The provided Service definition exposes Ollama's port (11434) for external access.
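Once the Service is up, one way to sanity-check the deployment is to hit Ollama's REST API — `GET /api/tags` lists the models installed on the server. The sketch below uses only Python's standard library; it assumes the Service is reachable locally, e.g. via `kubectl port-forward svc/ollama-service-rocm 11434:11434`:

```python
import json
import urllib.request


def tags_request(host: str = "localhost", port: int = 11434) -> urllib.request.Request:
    """Build a GET request for Ollama's model-listing endpoint."""
    return urllib.request.Request(f"http://{host}:{port}/api/tags")


def list_models(host: str = "localhost", port: int = 11434) -> list[str]:
    """Return the names of the models installed on the Ollama server."""
    with urllib.request.urlopen(tags_request(host, port)) as resp:
        payload = json.load(resp)
    return [model["name"] for model in payload.get("models", [])]
```

With the port-forward running, calling `list_models()` should return the names of any models you've pulled; an empty list simply means the server is up but no models are installed yet.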
## Important Note:
The provided Docker Compose configuration snippet seems to be for Nvidia GPUs and won't work for AMD with ROCm. Refer to Ollama's documentation for configuration specific to ROCm.
## Unleash the Power of Your AMD GPU with Ollama!
With Ollama and ROCm working in tandem on your AMD-powered Kubernetes cluster, you're well-equipped to tackle demanding LLM tasks. Remember to consult Ollama's official documentation for detailed instructions and troubleshooting. Happy experimenting! | ajeetraina | |
1,867,939 | mysql: using json data and not hating it | it's a universal truth that one of the primary jobs of backend developers is drinking from the... | 0 | 2024-05-29T12:51:40 | https://dev.to/gbhorwood/mysql-using-json-data-and-not-hating-it-6lc | mysql, json | it's a universal truth that one of the primary jobs of backend developers is drinking from the firehose of json the frontends are constantly spraying at us. normally, we pick apart all those key/value pairs and slot them into the neatly-arranged columns of our db, keeping our database structure nice and [normal](https://datasciencehorizons.com/database-normalization-a-practical-guide). but sometimes there are good reasons to just store a whole json object or array as an undifferentiated blob.
in the dark, old days, this json would go in a `text` column, which was fine if all we needed to do was store it and return it. but if we needed to do something more complex like, extract certain values from that json or, god forbid, use one in a `WHERE` clause, things could get out of hand pretty quickly.
fortunately, modern mysql supports `JSON` as a native data type, allowing us to work with json data in a way that's <i>almost</i> pleasurable. in this post we're going to go over extracting values from json and using json data in `WHERE` clauses.
## creating a `JSON` column
let's start by creating a table with a column of type `JSON`.
```sql
CREATE TABLE `some_data` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(200) NOT NULL,
`some_json` json NULL,
`some_text` text NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```
the test data we're going to be using is the json representation of [bratmobile's 'kiss and ride' single](https://www.discogs.com/master/409458-Bratmobile-Kiss-And-Ride).
```json
{
"artist": "Bratmobile",
"title": "Kiss and Ride",
"format": "single",
"year": 1992,
"label": "Kill Rock Stars",
"tracks": {
"A": [
{
"title": "Kiss and Ride",
"duration_seconds": 88
}
],
"B": [
{
"title": "No You Don't",
"duration_seconds": 105
},
{
"title": "Queenie",
"duration_seconds": 79
}
]
}
}
```
we'll insert that json into both a native `JSON` type column as well as into a `TEXT` column. we'll come back to that text col later.
```sql
INSERT INTO some_data VALUES(
null,
'some name',
-- as json
'{ "artist": "Bratmobile", "title": "Kiss and Ride", "format": "single", "year": 1992, "label": "Kill Rock Stars", "tracks": { "A": [ { "title": "Kiss and Ride", "duration_seconds": 88 } ], "B": [ { "title": "No You Don\'t", "duration_seconds": 105 }, { "title": "Queenie", "duration_seconds": 79 } ] } }',
-- as string
'{ "artist": "Bratmobile", "title": "Kiss and Ride", "format": "single", "year": 1992, "label": "Kill Rock Stars", "tracks": { "A": [ { "title": "Kiss and Ride", "duration_seconds": 88 } ], "B": [ { "title": "No You Don\'t", "duration_seconds": 105 }, { "title": "Queenie", "duration_seconds": 79 } ] } }'
);
```
## extracting some data from that json
now that we have some json in our `JSON` column, we can work on selecting out individual values. we do this with mysql's `JSON_EXTRACT` function.
`JSON_EXTRACT` takes two arguments: the column and the path of the element you want to select. paths are separated by a dot, kind of the same way directory paths are separated by a `/`, and the top level of the path is denoted by `$`. if you've ever struggled with [`jq`](https://jqlang.github.io/jq/), you should feel comfortable struggling with this.
in this example, we select the `artist` element from the top level of our json object.
```sql
SELECT JSON_EXTRACT(some_json, '$.artist') as artist
FROM some_data
WHERE id = 1;
+--------------+
| artist |
+--------------+
| "Bratmobile" |
+--------------+
```
mysql also provides the `->` operator as a shorthand for `JSON_EXTRACT` to help make our queries a little cleaner.
```sql
SELECT some_json -> '$.artist' as artist
FROM some_data
WHERE id = 1;
+--------------+
| artist |
+--------------+
| "Bratmobile" |
+--------------+
```
an annoyance we notice immediately with these results is that they're quote-enclosed. we don't want this. nobody wants this. let's remove them with the function `JSON_UNQUOTE`
```sql
SELECT JSON_UNQUOTE(JSON_EXTRACT(some_json, '$.artist')) as artist
FROM some_data
WHERE id = 1;
+------------+
| artist |
+------------+
| Bratmobile |
+------------+
```
the shorthand `->` operator we used before can be modified to `->>` to automatically include `JSON_UNQUOTE`. using this operator is the same as enclosing a call to `JSON_EXTRACT` in `JSON_UNQUOTE`:
```sql
SELECT some_json ->> '$.artist' as artist
FROM some_data
WHERE id = 1;
+------------+
| artist |
+------------+
| Bratmobile |
+------------+
```
## dealing with arrays
getting single elements from the top level is great, but we're also going to want to deal with arrays.
we can reference arrays in the path we pass to `JSON_EXTRACT` by using an index inside square brackets. this is pretty familiar stuff. for instance, in our sample data we have an object called `tracks` that contains two arrays keyed as `A` and `B`; these are the songs on side 'a' and 'b'. those `A` and `B` elements are arrays of song objects. if we wanted to get the first song of side `B`, we would construct our `JSON_EXTRACT` call like so:
```sql
SELECT JSON_EXTRACT(some_json, '$.tracks.B[0].title') as side_b_song_one
FROM some_data
WHERE id = 1;
+-----------------+
| side_b_song_one |
+-----------------+
| "No You Don't" |
+-----------------+
```
looking at the path argument, we can see we start at the top of the hierarchy with `$`, then reference the `tracks` object. next is the array `B`. we indicate we want the first element of this array with `[0]`, then continue our path by giving the key `title`. the result is the title of the first song on side b.
if we want _all_ the songs on side b, we can replace that index with the wildcard `*`. this returns a json array of titles.
```sql
SELECT JSON_EXTRACT(some_json, '$.tracks.B[*].title') as side_b
FROM some_data
WHERE id = 1;
+-----------------------------+
| side_b |
+-----------------------------+
| ["No You Don't", "Queenie"] |
+-----------------------------+
```
we could also use the `**` wildcard to do this job.
```sql
SELECT JSON_EXTRACT(some_json, '$.tracks.B**.title') as side_b
FROM some_data
WHERE id = 1;
+-----------------------------+
| side_b |
+-----------------------------+
| ["No You Don't", "Queenie"] |
+-----------------------------+
```
it's important to note that `**` actually means _all_ the paths between the prefix and the suffix. this means that we can use it to get all of the `title` elements under `tracks`, regardless of whether they're in the array keyed `A` or `B`, like so:
```sql
SELECT JSON_EXTRACT(some_json, '$.tracks**.title') as side_a_and_side_b
FROM some_data
WHERE id = 1;
+----------------------------------------------+
| side_a_and_side_b                            |
+----------------------------------------------+
| ["Kiss and Ride", "No You Don't", "Queenie"] |
+----------------------------------------------+
```
in that example, `**` matches both `A` and `B`. very powerful stuff.
we can also use ranges when defining our array indexes. for instance, if we wanted to get all the `title`s on side `B` between position zero and one, we could use the index `[0 to 1]`:
```sql
SELECT JSON_EXTRACT(some_json, '$.tracks.B[0 to 1].title') as side_b_first_two
FROM some_data
WHERE id = 1;
+-----------------------------+
| side_b_first_two |
+-----------------------------+
| ["No You Don't", "Queenie"] |
+-----------------------------+
```
and if all we want is the last element of an array, mysql allows us to use the literal word `last`:
```sql
SELECT JSON_EXTRACT(some_json, '$.tracks.B[last].title') as side_b_last_track
FROM some_data
WHERE id = 1;
+-------------------+
| side_b_last_track |
+-------------------+
| "Queenie" |
+-------------------+
```
### using this in `WHERE` clauses
extracting data from json is useful but, really, we could just select the whole column and run the data through `json_decode` in our controller. the _real_ money is using this in `WHERE` clauses. let's look:
```sql
SELECT JSON_EXTRACT(some_json, '$.title') as title
FROM some_data
WHERE JSON_EXTRACT(some_json, '$.year') = 1992;
+-----------------+
| title |
+-----------------+
| "Kiss and Ride" |
+-----------------+
```
here we just extracted `year` from our json column and used in a `WHERE` clause.
## a kludge to simulate `WHERE IN` clauses
if we want to write a `WHERE IN` clause against an array in our json object, things get a bit trickier. our initial ideal might be to simply try using `IN` on an array extracted from our json:
```sql
-- this absolutely DOES NOT WORK
SELECT JSON_EXTRACT(some_json, '$.title') as title
FROM some_data
WHERE "No You Don't" IN JSON_EXTRACT(some_json, '$.tracks**.title');
```
no dice. the problem here, of course, is that `WHERE IN` expects a list of values, not a json array. there's a type mismatch.
the solution is to use `MEMBER OF`.
this construct behaves kind of like a function and kind of like an operator. it takes its left value and tests if it resides in the json array that is passed to it as an argument.
```sql
SELECT JSON_UNQUOTE(JSON_EXTRACT(some_json, '$.artist')) as artist,
JSON_UNQUOTE(JSON_EXTRACT(some_json, '$.title')) as title
FROM some_data
WHERE "Queenie" MEMBER OF(CAST(JSON_EXTRACT(some_json, '$.tracks**.title') AS json));
+------------+---------------+
| artist | title |
+------------+---------------+
| Bratmobile | Kiss and Ride |
+------------+---------------+
```
the thing to note here is the argument to `MEMBER OF`. we have extracted all the `title` values from our json columns and mashed them together into one array. that 'array' isn't really an array, though; it's a string. and not even a json string, just a string string.
since `MEMBER OF` requires json-style data as an argument, we need to call `CAST` to turn the string of our array into an actual json array.
this solution is a bit messy, and if i am kludging up the wrong tree here and there is a better, more elegant way to go about this, i am very much open to input. however, this construct <i>does</i> work.
## wait. this works on `TEXT` columns too?
the secret is that all of the stuff we've covered here also works on `TEXT` columns. if we run that `WHERE IN` select against our `some_text` column, it works fine:
```sql
SELECT JSON_UNQUOTE(JSON_EXTRACT(some_json, '$.artist')) as artist,
JSON_UNQUOTE(JSON_EXTRACT(some_json, '$.title')) as title
FROM some_data
WHERE "Queenie" MEMBER OF(CAST(JSON_EXTRACT(some_text, '$.tracks**.title') AS json));
+------------+---------------+
| artist | title |
+------------+---------------+
| Bratmobile | Kiss and Ride |
+------------+---------------+
```
so why bother with the `JSON` column type at all? well, most importantly, when we use the `JSON` type we get automatic validation on insert or update. we can put invalid json into `TEXT` columns and mysql will never complain; it will just wait for us to be disappointed at select time.
secondly, the native `JSON` type is far more efficient. can we use extracted json values in a five table join? yes. will it be painfully slow if we're using `TEXT`? also yes. native `JSON` will be much faster.
finally, there will almost certainly be new json functions in future versions of mysql (including, hopefully, a more-sane way to do `WHERE IN`s) and there is no guarantee that they will behave well with `TEXT`. bottom line: if you know you're putting json in a column, make it a `JSON` column.
> 🔎 this post was originally written in the [grant horwood technical blog](https://gbh.fruitbat.io/2024/05/28/mysql-using-json-data-and-not-hating-it/)
| gbhorwood |
1,868,974 | Top Plugins for Importing HTML Content into WordPress | WordPress is a powerful and flexible content management system (CMS) that has become a favorite among... | 0 | 2024-05-29T12:48:21 | https://dev.to/__6097f/top-plugins-for-importing-html-content-into-wordpress-332n | WordPress is a powerful and flexible content management system (CMS) that has become a favorite among developers and content creators. One of the common tasks for many website owners is importing HTML content into their WordPress sites. This process can be streamlined significantly by using the right plugins. Here, we'll explore some of the top plugins that make it easy to import HTML to WordPress, ensuring a smooth and efficient transition.
Why Use Plugins to Import HTML Content?
Using plugins to import HTML to WordPress simplifies the process, saves time, and minimizes the risk of errors. These plugins are designed to handle the intricacies of converting HTML files into WordPress posts or pages, maintaining the integrity of your content and formatting. Additionally, plugins can offer customization options to fit your specific needs, making them invaluable tools in website migration and content management.
Top Plugins for Importing HTML Content
1. HTML Import 2
HTML Import 2 is a robust plugin designed specifically to [import HTML to WordPress](https://hutko.dev/blog/5-easy-steps-to-html-import-content-into-wordpress/). It allows users to batch import multiple HTML files into WordPress posts or pages. This plugin is highly customizable, enabling users to map HTML elements to WordPress content fields, ensuring that your imported content retains its structure and formatting.
Key Features:
Batch import capabilities
Customizable HTML element mapping
Supports custom post types
2. WP All Import
WP All Import is a versatile plugin that supports importing various types of content into WordPress, including HTML files. With a user-friendly drag-and-drop interface, it simplifies the process of importing and mapping HTML content to WordPress fields. This plugin is especially useful for larger projects where you need to import a significant amount of content.
Key Features:
Drag-and-drop interface
Supports large file imports
Custom field and taxonomy support
3. Import HTML Pages
As the name suggests, Import HTML Pages is a straightforward plugin designed to import HTML to WordPress. It focuses on simplicity and ease of use, making it an excellent choice for users who need to quickly import HTML pages without dealing with complex settings. The plugin automatically converts HTML files into WordPress pages, preserving the original structure and layout.
Key Features:
Simple and intuitive interface
Automatic HTML to WordPress conversion
Retains original HTML structure
4. FG Joomla to WordPress
While primarily designed for migrating Joomla sites to WordPress, FG Joomla to WordPress also offers robust support for importing HTML content. This plugin can be particularly useful if your HTML content originates from a Joomla site. It ensures that all your content, including articles, media, and metadata, is accurately imported into WordPress.
Key Features:
Joomla to WordPress migration
HTML content import support
Media and metadata preservation
5. WP Crawler
WP Crawler is a powerful plugin that can be used to import HTML to WordPress by scraping content from any website. This plugin is highly configurable, allowing users to specify which elements to import and how to map them to WordPress fields. WP Crawler is ideal for users who need to migrate content from multiple HTML sources or websites.
Key Features:
Web scraping capabilities
Customizable element mapping
Supports multiple content sources
How to Choose the Right Plugin
Selecting the right plugin to import HTML to WordPress depends on your specific needs and the complexity of your project. Consider the following factors:
Volume of Content: If you need to import a large number of HTML files, choose a plugin that supports batch imports.
Customization: Look for plugins that offer customizable mapping of HTML elements to WordPress fields.
Ease of Use: If you're not tech-savvy, opt for plugins with a user-friendly interface and straightforward setup.
Compatibility: Ensure the plugin is compatible with your WordPress version and other installed plugins.
| __6097f | |
1,868,971 | Client Side Ad Insertion Crash Course | This guide acts as a quick overview of concepts relevant to CSAI, it is not meant to act as a... | 0 | 2024-05-29T12:43:28 | https://dev.to/video/client-side-ad-insertion-crash-course-4fdm | csai, ads, video, streaming | This guide acts as a quick overview of concepts relevant to CSAI, it is not meant to act as a complete tutorial.
## What is CSAI?
Client Side Ad Insertion means that the app on the user's device is responsible for fetching and displaying advertisements as part of the streaming experience. This is in contrast to Server Side Ad Insertion, where the ads are stitched into the video on the server.
## VMAP
The [VMAP](https://www.iab.com/guidelines/vmap/) is fetched or polled by the client and parsed to determine when an ad break should occur.
## VAST
The [VAST](https://www.iab.com/guidelines/vast/) document is fetched and parsed by the client; it represents the content of an ad break.
The VAST document can contain several types of ads. For example: video, images, and companion links.
The VAST specifies how an ad should be tracked. It says which events need to be sent to a tracking server in order to verify if and how much of the ad was watched and/or interacted with.
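As a rough illustration of this tracking mechanism, here is a hedged Python sketch. The element layout follows the VAST `<TrackingEvents>`/`<Tracking event="...">` convention, but the document and the `tracker.example` URLs below are invented purely for the example:

```python
import xml.etree.ElementTree as ET

# A trimmed, illustrative VAST fragment (real documents carry much more metadata).
VAST_XML = """
<VAST version="4.0">
  <Ad>
    <InLine>
      <Creatives>
        <Creative>
          <Linear>
            <TrackingEvents>
              <Tracking event="start">https://tracker.example/start</Tracking>
              <Tracking event="midpoint">https://tracker.example/midpoint</Tracking>
              <Tracking event="complete">https://tracker.example/complete</Tracking>
            </TrackingEvents>
          </Linear>
        </Creative>
      </Creatives>
    </InLine>
  </Ad>
</VAST>
"""

def collect_tracking_urls(vast_xml: str) -> dict:
    """Map each tracking event name to the URLs the client must ping for it."""
    root = ET.fromstring(vast_xml)
    events = {}
    for tracking in root.iter("Tracking"):
        events.setdefault(tracking.get("event"), []).append(tracking.text.strip())
    return events

events = collect_tracking_urls(VAST_XML)
print(events["start"])  # the client fires these URLs when playback starts
```

In a real player, the client would fire each collected URL as the corresponding playback milestone is reached.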
## Displaying the Ads
When it is time to show the ad to the user, it is usually handled by pausing the content stream, creating a new video element, overlaying it on the content stream, and playing the ad(s). When the ad break is finished, remove the ad specific video element and resume the content stream. | martinstark |
1,868,973 | Building multi platform games in Flutter News 2024 #21 ʚїɞ | Hey Flutter enthusiasts! Ever worry about missing key Flutter updates? Well, worry no... | 26,008 | 2024-05-29T12:42:42 | https://dev.to/lucianojung/i-will-develop-any-flutter-ui-for-your-flutter-app-development-in-flutter-news-2024-21-eyie-nh4 | flutter, news, dart, discuss | ## Hey Flutter enthusiasts!
Ever worry about missing key Flutter updates? Well, worry no more!
Starting 2024, I'm here to keep you informed with a weekly Monday report. Let's stay ahead in the world of Flutter!
## Table of Contents
1. {% cta #major-flutter-updates %} Major Flutter updates {% endcta %}
2. {% cta #new-flutter-videos %} New Flutter Videos {% endcta %}
3. [New Flutter Packages](#new-flutterpackages)
4. [New Dev Posts](#new-devposts)
5. [New Medium Posts](#new-mediumposts)
---
## Major Flutter updates:
> There are no major Flutter updates this week!
-> Currently [Flutter Version Google I/O 3.22](https://docs.flutter.dev/release/whats-new)
---
## New Flutter Videos:
> The [Flutter YouTube Channel](https://youtube.com/@flutterdev?si=RZyl1nLVnSt373Vu) did post new Videos:
{% embed https://www.youtube.com/watch?v=x2WOHonEwqM %}
\#GoogleIO, #Flutter
{% embed https://www.youtube.com/watch?v=7mG_sW40tsw %}
\#Flutter
---
## New Flutter-Packages
{% details [flutter_picker_plus](https://pub.dev/packages/flutter_picker_plus) (Version 1.1.2) %} Provide flexible parameters to meet various needs with NumberPicker, DateTimePicker, ArrayPicker, and default linkage Picker.
\#flutter {% enddetails %}
{% details [localization_pro](https://pub.dev/packages/localization_pro) (Version 1.0.1) %} A dynamic Flutter package for seamless localization management.
\#flutter, #shared_preferences {% enddetails %}
{% details [torch_flashlight](https://pub.dev/packages/torch_flashlight) (Version 0.0.2) %} A Flutter plugin for controlling the torch (flashlight) on/off functionality in mobile devices.
\#flutter, #flutter_web_plugins, #plugin_platform_interface {% enddetails %}
{% details [openvidu_flutter](https://pub.dev/packages/openvidu_flutter) (Version 0.0.11) %} Migration from openvidu-android to Flutter. The package contains a working example that targets OpenVidu 2.29.0.
\#crypto, #cupertino_icons, #flutter, #flutter_webrtc, #html, #intl, #logging {% enddetails %}
{% details [device_preview_plus](https://pub.dev/packages/device_preview_plus) (Version 1.2.0) %} Approximate how your Flutter app looks and performs on another device
\#collection, #device_frame, #flutter, #flutter_localizations, #freezed_annotation, #json_annotation, #provider, #shared_preferences {% enddetails %}
---
### New Dev-Posts
{% embed https://dev.to/devlifeofbrian/a-day-in-my-dev-life-7-1pgm %}
{% embed https://dev.to/n968941/understanding-low-code-mobile-app-development-platforms-pi8 %}
{% embed https://dev.to/gabbygreat/number-input-on-flutter-textfields-the-right-way-4ip0 %}
{% embed https://dev.to/n968941/understanding-the-technology-behind-flutter-4afc %}
{% embed https://dev.to/harsh8088/implementing-ssl-pinning-in-flutter-3e8a %}
---
### New Medium-Posts
{% details [I will develop any Flutter UI for your Flutter app development](https://medium.com/@withmbilal/i-will-develop-any-flutter-ui-for-your-flutter-app-development-f3cfec49cad5) by Muhammad Bilal %} I will develop any Flutter UI for your Flutter app development. If you are going to develop an app for yourself, then the UI Design matters a lot. Good UI means a greater number of conversions..
\Flutter, Hire Mobile App Developer, Flutter App Development, App Development, App Development Company {% enddetails %}
{% details [I will be your flutter developer and build an Android ios flutter app with Firebase](https://medium.com/@withmbilal/i-will-be-your-flutter-developer-and-build-an-android-ios-flutter-app-with-firebase-f69f53140ada) by Muhammad Bilal %} If you want someone to develop your Hybrid Flutter app on Android, IOS to increase your business or for your client. Keep reading. We are a team of Flutter developers with 3+ years of experience in…
\Flutter, Flutter App Development, Flutter Widget, Flutter Ui, Flutter Web {% enddetails %}
{% details [How can I enter the job market and start my career in programming](https://medium.com/@mamybhwr/how-can-i-enter-the-job-market-and-start-my-career-in-programming-e5db203a779f) by elmami bhwr %} Hello creators! Today we will talk about a very important topic for all those who want to start a career in the program and enter the active labor market. Whether you are a beginner or have…
\Flutter, Flutter App Development, Flutter Widget, Flutter Ui, Flutter Web {% enddetails %}
{% details [Difference between core stack (Java & kotlin) and Flutter](https://medium.com/@harshilsuthar427/difference-between-core-stack-java-kotlin-and-flutter-d5ceba1e0e1d) by Harshil Suthar %} The differences between app development using core stacks like Java and Kotlin versus using Flutter are significant in terms of language, development approach, and the capabilities offered. Here’s a…
\Java, Flutter, Kotlin, Flutter App Development, Difference {% enddetails %}
{% details [Automatically launching “build_runner watch” for FlutterDart](https://medium.com/@akaita/automatically-launching-build-runner-watch-for-flutter-d15ecd0facd9) by Akaita %}
\ {% enddetails %}
---
Last Flutter News: [Flutter News 2024 #20 ʚїɞ](https://dev.to/lucianojung/series/26008)
_Did I miss any recent updates? Feel free to share any important news I might have overlooked!_ | lucianojung |
1,868,972 | Key Principles to Lead in the Digital Age | In the fast-paced digital age, effective leadership requires a transformative approach that adapts to... | 0 | 2024-05-29T12:42:22 | https://victorleungtw.com/2024/05/29/lead/ | leadership, innovation, digital, collaboration | In the fast-paced digital age, effective leadership requires a transformative approach that adapts to the evolving landscape. Here, we explore six essential principles that leaders must embrace to navigate and thrive in this dynamic environment.

#### 1. Customer Focus
**Putting the customer at the heart of everything:**
In the digital age, customer expectations are higher than ever. Leaders must prioritize understanding and meeting these needs. This involves gathering customer insights through data analytics, direct feedback, and market research. By fostering a culture that prioritizes customer satisfaction, organizations can build loyalty and drive continuous improvement.
**Case in point:** Amazon's obsession with customer satisfaction has driven its innovation and operational efficiency, making it a global leader in e-commerce.
#### 2. Output Orientation
**Focusing on results, not just activities:**
Output orientation means concentrating on the outcomes rather than the processes. Leaders should set clear goals, measure performance based on results, and continuously adjust strategies to meet objectives. This principle emphasizes efficiency and effectiveness, ensuring that every effort contributes to the overall mission.
**Case in point:** Google's OKR (Objectives and Key Results) framework exemplifies how focusing on specific outcomes can drive significant achievements and innovation.
#### 3. Rapid Experimentation
**Embracing agility and innovation through experimentation:**
In a rapidly changing digital landscape, the ability to quickly test and iterate on ideas is crucial. Leaders should create an environment that encourages experimentation, tolerates failure, and learns from it. This approach allows organizations to innovate continuously and stay ahead of the competition.
**Case in point:** Netflix's experimentation with different content types and distribution models has enabled it to become a dominant player in the streaming industry.
#### 4. Cross-Boundary Collaboration
**Breaking down silos for integrated solutions:**
Digital transformation often requires collaboration across different departments, geographies, and even industries. Leaders must foster a culture of teamwork and open communication, enabling diverse perspectives to come together and create holistic solutions. Cross-boundary collaboration leads to more comprehensive and innovative outcomes.
**Case in point:** The collaboration between Apple and various healthcare providers to develop HealthKit and ResearchKit showcases the power of cross-industry partnerships in driving innovation.
#### 5. Adaptability in Uncertainty
**Navigating change with resilience and flexibility:**
The digital age is marked by constant change and uncertainty. Leaders must be adaptable, ready to pivot strategies, and resilient in the face of challenges. This requires a proactive mindset, continuous learning, and the ability to foresee and respond to emerging trends and disruptions.
**Case in point:** Microsoft's transformation under Satya Nadella's leadership, embracing cloud computing and AI, demonstrates adaptability in an ever-evolving tech landscape.
#### 6. Empowering Team
**Fostering a culture of empowerment and trust:**
Empowering team members involves giving them the autonomy to make decisions, encouraging innovation, and providing the resources and support they need to succeed. Leaders should build trust, offer mentorship, and create opportunities for professional growth. Empowered teams are more motivated, engaged, and capable of driving the organization forward.
**Case in point:** Spotify's squad model allows small, autonomous teams to work on different parts of the product, fostering a culture of empowerment and rapid innovation.
### Conclusion
Leading in the digital age requires a shift from traditional leadership models to a more dynamic and responsive approach. By focusing on customer needs, emphasizing output, embracing experimentation, promoting collaboration, adapting to uncertainty, and empowering teams, leaders can navigate the complexities of the digital era and drive their organizations toward sustained success.
| victorleungtw |
1,868,970 | EX280 Exam Dumps: Insider Techniques for Excellence | Focus on Weak Areas: Use brain dumps to identify your weak areas and prioritize learning those... | 0 | 2024-05-29T12:39:36 | https://dev.to/wekire4314/ex280-exam-dumps-insider-techniques-for-excellence-2g6 | Focus on Weak Areas: Use brain dumps to identify your weak areas and prioritize learning those topics. Devote more time to the areas you struggle with and use the brain dump as a guide to address gaps in your knowledge. Practice regularly: Incorporate brain dump exercises into your study routine. Regular practice **EX280 Exam Dumps ** not only enhances your learning, but also helps you familiarize yourself with the format of the test and improve your speed and accuracy.
Simulating exam conditions: When you use Braindump to practice, you are simulating the exam environment. Set a timer, limit distractions, and stick to the rules and format of the test. This will prepare you mentally and physically for the actual exam.
Watch your mistakes: Analyze the mistakes you made while practicing Braindump. Understand why you made a mistake in the question, learn from your mistakes, and avoid repeating them in the future.
CLICK HERE FOR MORE INFO>>>>>>>>>>>>>>> https://dumpsarena.com/redhat-dumps/ex280/
| wekire4314 | |
1,867,553 | Java 21: The Magic Behind Virtual Threads | To understand virtual threads very well, we need to know how Java threads work. Quick Introduction... | 0 | 2024-05-29T12:37:58 | https://dev.to/elayachiabdelmajid/java-21-virtual-threads-1h5b | java, multitheading, virtualthread, javathread | To understand virtual threads very well, we need to know how Java threads work.
- **Quick Introduction to Java Threads (Platform Threads) and How They Work**
First, let's review the relationship between the threads we've been creating (Java threads) and the OS threads. Whenever we create an object of type Thread, that object, among other things, contains the code that needs to execute and the start method. When we run that start method, we ask the OS to create and start a new OS thread belonging to our application's process, and ask the JVM to allocate a fixed-size stack space to store the thread's local variables from that point on. The OS is fully responsible for scheduling and running the thread on the CPU, just like any other thread.

So, in a sense, that Thread object inside the JVM is just a thin layer or wrapper around an OS thread.
From now on, we're going to call this type of Java thread a platform thread. As we've already seen, those platform threads are expensive and heavy, because each platform thread maps 1-to-1 to an OS thread, which is a limited resource, and it is also tied to a static stack space within the JVM.

- **Introduction to Virtual Threads**
Virtual threads are a relatively new type of thread, introduced as a preview feature in **JDK 19** and finalized in **JDK 21**.
Like platform threads, virtual threads contain, among other things, the code we want to execute concurrently, and the start method. However, unlike a platform thread, a virtual thread fully belongs and is managed by the JVM and does not come with a fixed-size stack.

The OS takes no role in creating or managing it and is not even aware of it. In fact, a virtual thread is just like any Java object allocated on the heap and can be reclaimed by the JVM's garbage collection when it is no longer needed. The consequence of those facts is that unlike platform threads, which are very expensive to create and heavy to manage, virtual threads are very cheap and fast to create in large quantities.
Now, a good question you may ask at this point is: if virtual threads are just Java objects, how do they actually run on the CPU?
The answer is, as soon as we create at least one virtual thread, under the hood, the JVM creates a relatively small internal pool of platform threads. Whenever the JVM wants to run a particular virtual thread, for example, thread A, it mounts it on one of the platform threads within its pool.
When a virtual thread is mounted on a platform thread, that platform thread is called a carrier thread. If the virtual thread finishes its execution, the JVM will unmount that thread from its carrier and make that platform thread available for other virtual threads. That virtual thread object now becomes garbage, which the garbage collection can clean up at any time. However, in certain situations, if thread A has not finished but is unable to make any progress at that time, the JVM will unmount it but save its current state on the heap.

It's worth pointing out that we as developers have very little control over the carrier threads and the scheduling of the virtual threads on them. It is something that the JVM manages for us under the hood.
- **Quick Demo**
For demonstration purposes, we will create a Java thread to see the difference between a virtual thread and a platform thread.
- Platform Thread (Java Thread)
Inside your IDE, create a class with a main method, like this:
```java
public class ThreadDemo {
public static void main(String[] args) {
Thread thread = new Thread(() -> {
System.out.println("Thread: " + Thread.currentThread());
});
thread.start();
}
}
```
So, let's create a virtual thread:
```java
public class VirtualThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.startVirtualThread(() -> {
            System.out.println("Virtual Thread: " + Thread.currentThread());
        });
        vt.join(); // virtual threads are daemon threads, so wait for it to finish
    }
}
```
As you can see when we run the program, the first thing we notice is that the printed object is of type VirtualThread. We also see that its ID is 24 and that its carrier is named "ForkJoinPool-1-worker-1". This tells us a few things. First, it tells us that to schedule this and any future virtual threads, the JVM created a dedicated internal ForkJoinPool of platform threads, and then the JVM mounted our virtual thread on one of those worker threads, which is called worker-1.
To make this easier to understand, let's create another virtual thread:
```java
public class VirtualThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread first = Thread.startVirtualThread(() -> {
            System.out.println("Virtual Thread 1: " + Thread.currentThread());
        });
        Thread second = Thread.startVirtualThread(() -> {
            System.out.println("Virtual Thread 2: " + Thread.currentThread());
        });
        first.join();
        second.join();
    }
}
```
As you can see, we have two virtual threads; their IDs are 24 and 25, respectively. They ran on the same dedicated ForkJoinPool of carrier threads, but because we ran them concurrently, each one was mounted on a different worker thread as its carrier. The first one was mounted on worker-1, and the second on worker-2.
Now, to see the relationship between the number of virtual threads and the number of platform threads within that ForkJoinPool, let's increase the number of virtual threads from 2 to 20:
```java
import java.util.ArrayList;
import java.util.List;

public class VirtualThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < 20; i++) {
            int id = i + 1; // effectively final copy, usable inside the lambda
            threads.add(Thread.startVirtualThread(() -> {
                System.out.println("Virtual Thread " + id + ": " + Thread.currentThread());
            }));
        }
        for (Thread thread : threads) {
            thread.join(); // wait for every virtual thread before main exits
        }
    }
}
```
As you can see on the screen, we indeed created 20 new virtual threads, each with its own unique ID. However, based on their names, we can see that the JVM dynamically decided to create a pool of seven platform threads to be their carriers, and all those virtual threads were scheduled to run on this small pool of threads.
- **Conclusion**
I hope this blog is helpful to you, and I hope you enjoy it. | elayachiabdelmajid |
1,868,969 | How to setup your own mail server. | Well, recently I have been asked to explain more about running your own mail server. Before going... | 0 | 2024-05-29T12:37:03 | https://dev.to/kloudino/how-to-setup-your-own-mail-server-3k41 | Well, recently I have been asked to explain more about running your own mail server.
Before going further I have to tell you this one is not an easy one for multiple reasons.
Your mail server has to meet many requirements before it is recognized as a trusted mail server by providers like Gmail, Outlook, and many more.
One of the most important factors is your IP: first of all, your IP should not be blacklisted. There are many sites out there you can use to check whether your IP is blacklisted, like this one.
You might be wondering why an IP gets blacklisted. The answer is simple: that IP has been abused by someone else.
The best option is to ask your provider to assign you a brand new IP, and then verify that it is not blacklisted.
Another important factor is your DNS settings.
You have to set up many DNS records, such as DMARC, DKIM, and more,
which need to be configured one by one, something I personally found difficult and time-consuming.
Otherwise, almost all your emails will either not be delivered at all or will land in the spam folder instead of the actual inbox.
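For orientation only, here is roughly what such records can look like in a DNS zone. Everything below is a placeholder sketch: the domain `example.com`, the DKIM selector `mail`, the documentation IP `203.0.113.10`, and the policy values would all be replaced with your own:

```
; Illustrative TXT records for a hypothetical domain "example.com"
example.com.                 IN TXT "v=spf1 ip4:203.0.113.10 -all"
mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<your-public-key>"
_dmarc.example.com.          IN TXT "v=DMARC1; p=quarantine; rua=mailto:[email protected]"
```

The SPF record declares which IPs may send for the domain, the DKIM record publishes the signing public key, and the DMARC record tells receivers what to do when the first two checks fail.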
Apart from these, you need to install a couple of packages and configure it manually which is also time consuming.
There are many solutions to setup a mail server, but the most common one is:
Postfix: A mail transfer agent (MTA) that routes and delivers email. It’s lightweight, fast, and configurable.
Dovecot: Provides IMAP and POP3, which allow mail clients to read emails from the server.
MySQL: Stores information like domains, virtual users, passwords, and mail aliases.
ViMbAdmin: A web interface for mailbox administration that lets you add and remove domains and mail users.
Amavis: A content filter that checks emails for spam and viruses using other packages.
ClamAV: A free antivirus for Linux.
Spamassassin: A spam filter.
Sieve: A mail filtering language.
RoundCube: A webmail interface for mail users.
Nginx: A web server for ViMbAdmin and RoundCube
Well, I don't recommend going through all these configurations unless you have to for some reason. So what should you do?
- One solution is to go with a cloud mail server and just buy it from a mail provider for a couple of dollars per month.
- Another solution is to use the Mail-in-a-Box script by Joshua Tauberer. This is an awesome all-in-one solution: you just install an OS, run this script, and that's it.
Link to its YouTube video
It will install and configure all the packages you need for a mail server. It also has a great admin panel where you can log in and manage your domains, DNS settings, and users.
It will also install Nextcloud on that server, but there is an option you can pass to the script to skip the Nextcloud installation.
The downside, which I don't like, is that you cannot use that server for any other purpose; it has to be dedicated to your mail server.
Note: once you are done with the mail server, use this website to check your mail score.
Browsing this link will give you an email address; you send a test email to it, then come back to the website and see your score.
If you score higher than 9, you are most probably good to go, and about 90 percent of your emails will land in the inbox rather than the spam folder.
From my own experience:
Whatever you do, when it comes to sending emails to Outlook addresses, your email will land in the spam folder anyway, regardless of how good and professional your private mail server is.
| kloudino | |
1,868,966 | Core Concepts of AI | Machine Learning Machine learning (ML) is a part of artificial intelligence (AI) and... | 0 | 2024-05-29T12:34:35 | https://dev.to/mohbohlahji/core-concepts-of-ai-g03 | machinelearning, robotics, ai, computervision | ## Machine Learning
Machine learning (ML) is a part of artificial intelligence (AI) and computer science. The main goal is to develop statistical algorithms. These algorithms help computers learn from data. They also help computers adjust to new information. Finally, they help computers perform better without specific programming.

ML involves training computer systems with many examples to understand problem-solving and predictive abilities, like how humans learn. The goal is to improve the system's accuracy over time.
1. **Supervised Learning**: In supervised learning, algorithms learn from labeled datasets. Each data point has a tag showing what it is or what it leads to. This method uses a preset "answer key" to teach algorithms how to understand data. Supervised learning involves making predictions and classifications. Supervised learning algorithms adjust their settings based on the labeled data provided to them. This helps them make accurate predictions based on new, unseen data.
2. **Unsupervised Learning**: In unsupervised learning, algorithms work without labels on datasets. They look for patterns and structures within the data without direct guidance. These algorithms explore the data, grouping similar data points or uncovering hidden relationships. They find patterns and insights from large, messy sets of data without needing prior knowledge of the process of sorting the data.
3. **Reinforcement Learning**: In reinforcement learning, algorithms learn the best actions through trial and error. They interact with their environment and get feedback in the form of rewards or penalties based on what they do. Unlike supervised learning, which needs labeled data, reinforcement learning does not. Instead, algorithms learn to maximize total rewards over time. We use this method in situations where decisions happen in sequence, like games, robotics, and guiding self-driving cars.
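To make the supervised-learning idea concrete, here is a minimal sketch under invented assumptions: a one-nearest-neighbour classifier over a tiny made-up labeled dataset, predicting the label of whichever training example sits closest to a new point.

```python
# Minimal supervised-learning sketch: a 1-nearest-neighbour classifier.
# The tiny labeled dataset below is invented purely for illustration.

def distance(a, b):
    # Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(train, point):
    """Return the label of the training example closest to `point`."""
    nearest = min(train, key=lambda example: distance(example[0], point))
    return nearest[1]

# Labeled examples: (features, label)
train = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 6.0), "dog"),
    ((5.5, 6.5), "dog"),
]

print(predict(train, (1.1, 1.0)))  # nearest examples are labeled "cat"
print(predict(train, (5.2, 6.1)))  # nearest examples are labeled "dog"
```

Real systems use far richer models, but the shape is the same: labeled examples in, a rule that generalizes to unseen points out.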
## Neural Networks
Neural networks, also known as ANNs, are computer models inspired by the human brain. They consist of interconnected nodes arranged in layers: input, hidden, and output. Nodes process input data by considering their importance (weights) and a certain threshold. They become active if the result is higher than a set limit. These networks improve with training data, getting better at tasks like recognizing patterns. Through simulation and training, they adjust and improve their responses.
Neural networks help with tasks like Google searches and talking to your phone. They make finding stuff and understanding what you say faster and better than if people did it by hand. They play a crucial role in machine learning and deep learning, helping with fast data analysis and problem-solving.

## Basic Structure and Functionality of Neural Networks
An Artificial Neural Network (ANN) has three main parts. Its basic structure includes:
1. **Input Layer**
- Receives the input data in the form of vectors.
- Contains neurons corresponding to the features of the input data.
- Each neuron stands for a different feature. The number of neurons in this layer matches the dimensions of the input data.
- The input layer sends the input data to the next layer. It doesn't do any computations itself.
2. **Hidden Layers**
- Intermediate layers between the input and output layers.
- Comprise many neurons.
- Each hidden layer performs computations on the input data using weights and biases.
- Activation functions apply to produce outputs that pass to the next layer.
- The number of hidden layers can change depending on how complicated the problem is.
- The number of neurons in each hidden layer can also vary based on the network's design.
3. **Output Layer**
- The final layer of the neural network.
- It consists of neurons that produce the network's output.
- The number of neurons depends on the type of problem the network is solving.
- Activation functions are applied to produce the final output of the network.
This basic structure lays the groundwork for ANNs. They can take on different shapes and setups depending on the problem at hand and how well they need to perform.
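The flow from input layer through hidden layer to output layer can be sketched in a few lines of plain Python. The 2-3-1 shape, weights, and biases below are arbitrary illustrative numbers, not trained values:

```python
import math

def sigmoid(x):
    # A common activation function: squashes any number into (0, 1)
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each neuron: weighted sum of inputs plus bias, passed through the activation."""
    return [sigmoid(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
            for neuron_w, b in zip(weights, biases)]

inputs = [0.5, -0.2]                      # input layer: one value per feature

hidden = layer(inputs,
               weights=[[0.1, 0.4], [-0.3, 0.8], [0.7, -0.6]],
               biases=[0.0, 0.1, -0.1])   # hidden layer: 3 neurons

output = layer(hidden,
               weights=[[0.2, -0.5, 0.9]],
               biases=[0.05])             # output layer: 1 neuron

print(output)  # a single activation between 0 and 1
```

Training would then adjust those weights and biases based on errors; this sketch shows only the forward pass.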
## Deep Learning
Deep learning is like a smart cousin of regular machine learning. We use deep neural networks, which function in our brains, to understand information from raw data. These networks excel at identifying patterns, organizing data into categories, and making forecasts. They earn the label "deep" due to their many layers collaborating to enhance results.
Deep learning powers many advanced technologies we rely on, such as self-driving cars. It also enables smart chatbots, like ChatGPT. Face recognition on your phone is another application. Deep learning helps in detecting medical issues too. It even assists devices in understanding spoken commands. It's like having super-smart helpers that can handle all sorts of tricky tasks.
Deep learning is enhancing functionality by empowering computers to think like humans. Its ability to learn and solve complex problems simplifies and adds excitement to life.
## Natural Language Processing (NLP)
Natural Language Processing (NLP) combines language with computers.
- It assists machines in grasping and generating human speech and text.
- It enables tasks like translation, voice commands, and summarizing text.
- It uses machine learning to analyze language data.
- Its goal is to help computers understand text better.
NLP helps digital assistants and business tools become smarter and work better. It is important for them.
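One of the simplest ways language data is turned into numbers for machine learning is a bag-of-words representation; the toy sentences below are invented for illustration:

```python
# Toy illustration of turning text into numbers (bag-of-words).

def tokenize(text):
    # Lowercase, drop periods, split on whitespace
    return text.lower().replace(".", "").split()

def bag_of_words(sentences):
    # Shared vocabulary, then one word-count vector per sentence
    vocab = sorted({word for s in sentences for word in tokenize(s)})
    vectors = [[tokenize(s).count(word) for word in vocab] for s in sentences]
    return vocab, vectors

vocab, vectors = bag_of_words([
    "Computers read text.",
    "Computers translate text.",
])
print(vocab)    # shared vocabulary across both sentences
print(vectors)  # one count vector per sentence
```

Modern NLP systems use far more sophisticated representations, but the core move is the same: text in, numeric features out.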
## Computer Vision
Computer vision assists computers in understanding digital images and videos. It is a type of artificial intelligence. It uses cameras, data, and algorithms instead of human eyes and brains. Computers learn to recognize patterns in images by looking at lots of examples.
Different industries like energy, manufacturing, and automotive use computer vision. Computer vision helps with taking photos, changing images, understanding what's in a picture, and getting information from the real world.
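A tiny taste of how computers find patterns in pixels: sliding a small filter over an image. The 4x4 "image" and the vertical-edge kernel below are invented toy values:

```python
# Sliding a 3x3 edge-detection filter over a toy 4x4 grayscale "image".

IMAGE = [
    [0, 0, 0, 9],
    [0, 0, 0, 9],
    [0, 0, 0, 9],
    [0, 0, 0, 9],
]

# A vertical-edge kernel: responds strongly where dark meets bright.
KERNEL = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(image, kernel):
    # Slide the kernel over every position where it fully fits
    size = len(kernel)
    out = []
    for r in range(len(image) - size + 1):
        row = []
        for c in range(len(image[0]) - size + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(size) for j in range(size)))
        out.append(row)
    return out

print(convolve(IMAGE, KERNEL))  # large values mark the vertical edge
```

Deep computer-vision models stack thousands of learned filters like this one, but the sliding-window idea is the same.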
## Robotics
Robotics involves combining science, engineering, and technology to make robots. People build, operate, and use these robots to do tasks in different industries. Robotics involves:
- Mechanical engineering.
- Computer science.
- Electrical engineering.
- Control systems.
- Software programming and other related fields.
The main goal of robotics is to
- create machines that can help people.
- perform tasks that are risky, monotonous, or unpleasant.
- enhance efficiency and precision, particularly in manufacturing.
With advancements in artificial intelligence, robots are becoming capable of handling complex situations.
Robotics involves many tasks. These include:
- Building mechanical parts
- Designing electrical components
- Writing software.
The goal is to make smart machines that can work in different places.
| mohbohlahji |
1,863,019 | Understanding Hash Tables: The Backbone of Efficient Data Storage | In the realm of computer science and programming, hash tables are indispensable tools that provide... | 0 | 2024-05-29T12:31:34 | https://dev.to/luisfpedroso/understanding-hash-tables-the-backbone-of-efficient-data-storage-25el | algorithms, computerscience, javascript, hashtable | In the realm of computer science and programming, hash tables are indispensable tools that provide efficient data storage and retrieval capabilities. Whether you're a seasoned developer or a beginner, understanding hash tables can significantly enhance your problem-solving toolkit. In this post, we will explore what exactly hash tables are, how they work, the mechanics of hashing, and their application. Let's get started
<img src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExc2Zwb3MwdW1pMjI3bHo2YWhtOTBqYmViZjMxaHdua2NhbGx1MXB1diZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/l0amJzVHIAfl7jMDos/giphy.gif" width="100%" height="100%" />
### What is a Hash Table?
Well, that's a good question, and to make a long story short, a hash table is a data structure that maps keys to values for highly efficient lookup. Think of it as a digital version of a dictionary, where each word (key) maps to a definition (value).
The image below describes what a hash table looks like:
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qakn4m8aj7nywjytsozx.png" alt="Understanding Hash Tables: The Backbone of Efficient Data Storage - A general view of a HashTable" />
<figcaption>A general view of a HashTable</figcaption>
</figure>
From the image above, there are three important things to mention:
- The "hash" function: The "hash" function transforms keys, such as "ab" into 1234 and "cd" into 5678. This transformation maps the keys to specific indices in an array, enabling quick data retrieval.
- Mapping to indexes: The hash values are mapped to indices in the array using a modulo operation to ensure they fit within the array's bounds. For example, 1234 maps the key "ab" to index 4 because 1234 % 5 (the length of the hash table) equals 4.
- Dealing with collisions: To handle cases like the keys "ef" and "gh", or "cd", "ij", and "kl" being stored at the same index, linked lists are used to manage these situations, also called collisions. By storing multiple keys at the same index in a chain, the hash table allows efficient data retrieval even in the presence of collisions.
Let's take a look at these items separately.
#### Hash Function
A hash function is a crucial component of a hash table. It takes an input (the "key") and returns an integer, which is used as the index where the value associated with the key is stored. For instance, the key "ab" might be transformed into the hash value 1234, and "cd" into 5678.
An example of a simple hash function for strings is shown below:
```js
function hash(key, tableSize) {
let hash = 23;
for (let i = 0; i < key.length; i++) {
hash = (hash * key.charCodeAt(i)) % tableSize;
}
return hash;
}
```
This function iterates over each character in the input string, multiplying its Unicode value with the hash variable, and uses the modulo operation to ensure the resulting hash value fits within the bounds of the hash table's length. The use of a prime number (23) as the initial value helps create a more uniform distribution of hash values, reducing the likelihood of collisions in the hash table.
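As a quick sanity check, here is the same function in use (the indices it produces come from this simple hash, not from the illustrative `1234`/`5678` values in the diagram above):

```javascript
function hash(key, tableSize) {
  let hash = 23;
  for (let i = 0; i < key.length; i++) {
    hash = (hash * key.charCodeAt(i)) % tableSize;
  }
  return hash;
}

const tableSize = 5;
for (const key of ["ab", "cd", "ef"]) {
  // Because the modulo is applied at every step, the result is
  // always a valid index within [0, tableSize).
  console.log(`key "${key}" -> index ${hash(key, tableSize)}`);
}
```

Whatever the key, the final modulo guarantees the returned value is a usable array index for the table.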
#### Collision Handling
A collision occurs when two keys hash to the same index, as in the case of the keys "ef" and "gh" in the image above. In situations like this, two techniques could be used to solve this problem: **separate chaining**, or **open addressing**.
- Separate Chaining: One of the most common methods for handling collisions. When a collision occurs, the key-value pairs that hash to the same index are stored in a linked list at that index.
- Open Addressing: This method involves finding another slot within the hash table for the colliding key. Techniques like `linear probing`, `quadratic probing`, and `double hashing` fall under this category.
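To make separate chaining concrete, here is a minimal sketch (the class and method names are my own, not from any particular library), where each array slot holds a chain of `[key, value]` pairs:

```javascript
class HashTable {
  constructor(size = 5) {
    this.size = size;
    // One chain (array of [key, value] pairs) per slot.
    this.buckets = Array.from({ length: size }, () => []);
  }

  hash(key) {
    let h = 23;
    for (let i = 0; i < key.length; i++) {
      h = (h * key.charCodeAt(i)) % this.size;
    }
    return h;
  }

  set(key, value) {
    const bucket = this.buckets[this.hash(key)];
    const pair = bucket.find(([k]) => k === key);
    if (pair) {
      pair[1] = value; // existing key: update in place
    } else {
      bucket.push([key, value]); // collision-safe: append to the chain
    }
  }

  get(key) {
    const bucket = this.buckets[this.hash(key)];
    const pair = bucket.find(([k]) => k === key);
    return pair ? pair[1] : undefined;
  }
}

const table = new HashTable(5);
table.set("ab", "first");
table.set("cd", "second");
table.set("cd", "updated"); // same key: value is replaced, not duplicated
console.log(table.get("ab"), table.get("cd"));
```

Even when two keys land in the same slot, lookups still work: the chain is scanned for the exact key, which is why chaining preserves correctness at the cost of a short linear scan.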
### Advantages of Hash Tables
- Fast Data Access: Hash tables provide average-case constant time complexity, O(1), for both insertions and lookups, making them highly efficient.
- Flexibility: They can store a wide variety of data types and can dynamically resize to accommodate more entries.
### Applications of Hash Tables
There are many ways you could use a hash table; the most common applications are:
- Databases: Hash tables are used in database indexing to quickly retrieve records.
- Caches: Acting as a fundamental piece of the implementation of caches for fast data retrieval.
- Sets and Maps in Programming Languages: Many programming languages use hash tables to implement sets and maps, such as Python's `dict` and `set`, and Java's `HashMap`.
### Conclusion
Hash tables are an example of efficient data storage and retrieval, due to their average-case constant time complexity for basic operations. By understanding the mechanisms of hashing and collision resolution, you can harness the power of hash tables to optimize your programs and solve complex problems with ease. Whether you are preparing for a coding interview or applying it to a project, hash tables are a fundamental tool that can make your data management tasks more efficient and effective.
To recap, remember that:
- A hash table is a data structure that maps keys to values.
- The average case of a hash table is constant time complexity, O(1), for both insertions and lookups.
- A hash function is a crucial component of a hash table.
- The use of a prime number helps create a more uniform distribution of hash values, reducing the likelihood of collisions in the hash table.
- The most common technique to deal with collisions is Separate Chaining.
- Hash tables are great for database indexing and cache.
### Resources
#### Tutorials
[How to Implement a Hash Table in JavaScript](https://www.youtube.com/watch?v=UOxTMOCTEZk): A YouTube video tutorial where Ben Awad explains in detail how to implement a Hash Table in Javascript.
#### Books
[Cracking the Coding Interview](https://www.crackingthecodinginterview.com/): 150 Programming Interview Questions and Solutions.
| luisfpedroso |
1,868,965 | Robust API Retry Mechanism with AWS Step Functions and Lambda | I have been working with external API calls for a while and have noticed they can sometimes fail for... | 0 | 2024-05-29T12:31:15 | https://dev.to/aws-builders/robust-api-retry-mechanism-with-aws-step-functions-and-lambda-4lap | aws, lambda, stepfunctions, softwaredevelopment | I have been working with external API calls for a while and have noticed they can sometimes fail for various reasons, such as network issues, server downtime, or rate limits on the server. So, I have built this solution to have a robust system to tackle this problem.
In this solution, we will leverage the AWS Step Function and Lambda Functions to construct a reliable retry mechanism. The State Machine will consist of a collection of Lambda functions invoked and stitched together to produce results. This article will walk you through the step-by-step guide.
## The main objective we are trying to solve:
While Step Functions inherently support retries within tasks, our specific challenge involves handling API rate limits from the server we are communicating with. The server imposes a rate limit and responds with a 429 status code if too many requests are made from the same IP address within a short period.
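For context, the standard client-side answer to a 429 is retrying with exponential backoff, sketched below (illustrative code, not the solution this article builds — the function name and delays are my own):

```python
import time

def call_with_backoff(make_request, max_attempts=4, base_delay=1.0):
    """Retry make_request() while it returns a 429, doubling the wait each time."""
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        status, body = make_request()
        if status != 429:
            return status, body  # success, or a non-rate-limit error
        if attempt < max_attempts:
            time.sleep(delay)
            delay *= 2  # exponential backoff: 1s, 2s, 4s, ...
    return status, body  # still rate-limited after all attempts
```

Because every attempt here comes from the same IP, a per-IP rate limit can keep rejecting them — which is what motivates spreading attempts across separate Lambda functions in the rest of this article.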
## Prerequisites
* AWS Account
* Basic understanding of AWS Lambda and Step Functions
---
## 1. Architecture:

### Workflow Explanation
1. **User Invokes Step Function State Machine**: The process begins when a user initiates the step function state machine. This could be triggered through an API call, a scheduled event, or another AWS service.
2. **Step Function Invokes Lambda (1st Attempt)**: The step function invokes the first Lambda function (Lambda 1). This Lambda function is responsible for making the API call.
3. **Response: Status**: Lambda 1 executes the API call and returns a status response. This response indicates whether the API call was successful (e.g., status code 200) or failed (e.g., any status code other than 200).
4. **If Failure Status ≠ 200 (2nd Attempt)**: If the response from Lambda 1 indicates a failure (status code not equal to 200), the step function will proceed to invoke a retry mechanism. This could involve retrying the same Lambda function or invoking a different Lambda function (Lambda 2) to handle the retry attempt.
5. **Response: Status**: Lambda 2 attempts to execute the API call and returns a status response. Similar to the first attempt, this response will indicate whether the retry was successful.
6. **If Success Status = 200**: If either Lambda 1 or Lambda 2 successfully executes the API call and returns a status code of 200, the step function completes successfully, and the user is notified of the success.
7. **If Failure Even After Retries**: Then we will fail the step function and forward the API error to the user with the appropriate status code.
To explain the architecture easily, I have created the above diagram with one retry only, but we will build the solution with two retries. Below is the state machine diagram.

---
## 2. Step-by-Step Guide
* **Create a base lambda function:**
This Lambda function will help us orchestrate the state machine: it starts the execution and handles logic based on the execution status.
```python
import boto3
import json
import time
def start_state_machine(body):
# Create a session with AWS credentials
session = boto3.Session(
aws_access_key_id='',
aws_secret_access_key='',
region_name=''
)
# Create a client to interact with AWS Step Functions
step_functions_client = session.client('stepfunctions')
# Define the ARN of the Step Function that you want to start
state_machine_arn = 'arn:aws:states::stateMachine:apiProxyStateMachine'
# Define the input to pass to the Step Function
input_data = body
# Start the Step Function with the specified input
response = step_functions_client.start_execution(
stateMachineArn=state_machine_arn,
input=json.dumps(input_data)
)
    # Poll until the execution completes, sleeping briefly between
    # checks instead of busy-waiting against the Step Functions API
    while True:
        execution_output = step_functions_client.describe_execution(
            executionArn=response['executionArn']
        )
        if execution_output['status'] in ['SUCCEEDED', 'FAILED', 'ABORTED']:
            break
        time.sleep(1)

    if execution_output['status'] == 'SUCCEEDED':
        return execution_output['output']
    else:
        return execution_output['status']
def lambda_handler(event, context):
event = event["body"]
data = start_state_machine(event)
response = json.loads(data)
return {
"statusCode": response["statusCode"],
"body": response["body"]
}
```
* **Create a function URL for the lambda function:**
Now that the Lambda function is ready, we can set up a function URL so requests can be sent to it directly. Refer to the article below to turn any Lambda function into an API with a function URL.
{% embed https://medium.com/technology-hits/how-to-use-aws-lambda-to-trigger-any-script-as-an-api-call-64f13d8b36e5 %}
* **Create child lambda functions:**
These will be simple lambda functions acting as a proxy; they will not handle any logic.
```python
import json
import requests  # must be bundled with the deployment package; it is not in the default Lambda runtime

def lambda_handler(event, context):
    api_url = "https://api.example.com/data"
    response = None
    try:
        response = requests.get(api_url)
        response.raise_for_status()
        return {
            'statusCode': 200,
            'body': json.dumps(response.json())
        }
    except requests.exceptions.RequestException as e:
        # response stays None if the request never completed
        return {
            'statusCode': response.status_code if response is not None else 500,
            'body': json.dumps({'error': str(e)})
        }
```
Using the code above, create three identical Lambda functions (referenced as `apiProxy1`, `apiProxy2`, and `apiProxy3` in the state machine definition).
* **Define Step Function State Machine:**
Next, we'll create a Step Functions state machine with a retry mechanism. Here is an example definition in JSON.
{% details Click to expand %}
```json
{
"Comment": "A description of my state machine",
"StartAt": "Proxy 1",
"States": {
"Proxy 1": {
"Type": "Task",
"Resource": "arn:aws:states:::lambda:invoke",
"OutputPath": "$.Payload",
"Parameters": {
"Payload.$": "$",
"FunctionName": "arn:aws:lambda:ap-south-1:378343485419:function:apiProxy1:$LATEST"
},
"Retry": [
{
"ErrorEquals": [
"Lambda.ServiceException",
"Lambda.AWSLambdaException",
"Lambda.SdkClientException",
"Lambda.TooManyRequestsException"
],
"IntervalSeconds": 2,
"MaxAttempts": 6,
"BackoffRate": 2
}
],
"Next": "Pass"
},
"Pass": {
"Type": "Pass",
"Next": "Choice"
},
"Choice": {
"Type": "Choice",
"Choices": [
{
"Variable": "$.statusCode",
"NumericEquals": 200,
"Next": "Pass (1)"
}
],
"Default": "Pass (7)"
},
"Pass (7)": {
"Type": "Pass",
"Next": "Wait"
},
"Wait": {
"Type": "Wait",
"Seconds": 5,
"Next": "Proxy 2"
},
"Pass (1)": {
"Type": "Pass",
"Next": "Success - 1"
},
"Success - 1": {
"Type": "Succeed"
},
"Proxy 2": {
"Type": "Task",
"Resource": "arn:aws:states:::lambda:invoke",
"OutputPath": "$.Payload",
"Parameters": {
"Payload.$": "$",
"FunctionName": "arn:aws:lambda:ap-south-1:378343485419:function:apiProxy2:$LATEST"
},
"Retry": [
{
"ErrorEquals": [
"Lambda.ServiceException",
"Lambda.AWSLambdaException",
"Lambda.SdkClientException",
"Lambda.TooManyRequestsException"
],
"IntervalSeconds": 2,
"MaxAttempts": 6,
"BackoffRate": 2
}
],
"Next": "Pass (8)"
},
"Pass (8)": {
"Type": "Pass",
"Next": "Choice (1)"
},
"Choice (1)": {
"Type": "Choice",
"Choices": [
{
"Variable": "$.statusCode",
"NumericEquals": 200,
"Next": "Pass (2)"
}
],
"Default": "Pass (3)"
},
"Pass (3)": {
"Type": "Pass",
"Next": "Proxy 3"
},
"Pass (2)": {
"Type": "Pass",
"Next": "Success -2"
},
"Success -2": {
"Type": "Succeed"
},
"Proxy 3": {
"Type": "Task",
"Resource": "arn:aws:states:::lambda:invoke",
"OutputPath": "$.Payload",
"Parameters": {
"Payload.$": "$",
"FunctionName": "arn:aws:lambda:ap-south-1:378343485419:function:apiProxy3:$LATEST"
},
"Retry": [
{
"ErrorEquals": [
"Lambda.ServiceException",
"Lambda.AWSLambdaException",
"Lambda.SdkClientException",
"Lambda.TooManyRequestsException"
],
"IntervalSeconds": 2,
"MaxAttempts": 6,
"BackoffRate": 2
}
],
"Next": "Pass (4)"
},
"Pass (4)": {
"Type": "Pass",
"Next": "Choice (2)"
},
"Choice (2)": {
"Type": "Choice",
"Choices": [
{
"Variable": "$.statusCode",
"NumericEquals": 200,
"Next": "Pass (5)"
}
],
"Default": "Pass (6)"
},
"Pass (6)": {
"Type": "Pass",
"Next": "Failure"
},
"Pass (5)": {
"Type": "Pass",
"Next": "Success - 3"
},
"Success - 3": {
"Type": "Succeed"
},
"Failure": {
"Type": "Succeed"
}
}
}
```
{% enddetails %}
---
## 3. Testing the State Machine
Trigger the state machine execution using the first lambda function URL and monitor it through the AWS State Machine Console. You should see the retries and the final result, whether it succeeds or fails.

---
## Conclusion -
Implementing a robust API retry mechanism using AWS Step Functions and Lambda is a powerful way to enhance the reliability of your API integrations. I have worked a great deal with vendor APIs, and their reliability is something you cannot always trust: they have rate limits, server IP-based wait times, and so on. Retrying through different Lambda functions gives us different source endpoints, which helps avoid IP-based wait-time blocking on top of the retry mechanism itself.
This solution provides a visual workflow to monitor and debug your API calls. With AWS Step Functions and Lambda, you can build a fault-tolerant API integration with minimal effort.
---
_Thanks for reading the tutorial. I hope you learn something new today. If you want to read more stories like this, I invite you to follow me.
Till then, Sayonara! I wish you the best in your learning journey._ | somilgupta |
1,868,963 | AI in Content Personalization: The Future of Customer Engagement | Growing digital content overwhelms consumers. AI-curated personalized content is the solution. AI... | 0 | 2024-05-29T12:26:18 | https://dev.to/saima_akhtar_833e72a48fa8/ai-in-content-personalization-the-future-of-customer-engagement-2noi | Growing digital content overwhelms consumers. AI-curated personalized content is the solution. AI analyzes your behaviors to deliver tailored recommendations and enhance your experience. The future is here. AI is revolutionizing content personalization across industries, from retail to media. Learn how AI-powered customization creates deeper engagement, improves satisfaction, and increases revenue. Discover how businesses partner with AI to personalize your journey like never before. The possibilities are endless when content becomes intelligent. Discover how AI drives digital marketing (https://www.safatechsolutions.com/), enabling companies to connect with you more.
**What Is AI in Content Personalization?**
AI personalizes content based on user interests and behavior.
**Predictive Analytics**
AI anticipates user preferences by applying predictive analytics. It analyzes past interactions to tailor content to each user.
**Dynamic Content Matching**
AI then matches each user to the most relevant content in real-time. AI assesses new content to match user interests and preferences. Dynamic matching enables companies to serve the most relevant content to each user.
**Continuous Optimization**
AI learns and optimizes content personalization (https://web.facebook.com/digitalcooperativemarketing) based on user interactions with served content. User actions like clicks, shares, or comments help AI understand interests better. If a user ignores or doesn't engage with an article, AI adjusts to avoid similar content for that user in the future. This continuous feedback loop results in tailored content personalization.
**Benefits of AI-Powered Personalization**
[AI personalization](https://www.safatechsolutions.com/) achieves ideal content relevance for each customer. Tailoring content to users' interests boosts engagement, satisfaction, and brand loyalty. AI enables personalized experiences at scale for companies, catering to millions of users. AI paves the way for a new era of hyper-personalized digital experiences.
**How AI is Revolutionizing Content Personalization**
**Tailored Content**
AI analyzes extensive customer data to reveal insights into interests, preferences, and behaviors. Using this information, AI systems curate and tailor content to match individual customers. An e-commerce site may use AI to recommend products based on a customer’s history. AI might be used by a media outlet to provide readers with tailored news suggestions depending on their interests. With AI, content personalization can scale for thousands or millions of customers.
**Dynamic Optimization**
AI enables content personalization to be an ongoing, dynamic process. As customers interact, AI tracks to enhance personalization. Recommendations and tailored content improve over time based on the extra data. AI can also adjust to changes in customer interests or behaviors. If a customer shows interest in a new topic, AI recommends related content.
**Enhanced Experiences**
AI-powered content personalization enhances the customer experience. When customers receive tailored, relevant content and recommendations, they feel understood and valued. Strong personalization keeps customers engaged, leading to increased site and app usage. Customers may even come to rely on the personalized content curated for them. These types of meaningful engagements build brand loyalty and customer lifetime value.
For brands, AI provides a competitive edge with superior personalization and customer experiences. With AI, brands gain insights and drive metrics at an unprecedented scale. AI will revolutionize content personalization and customer engagement for years.
**Key Benefits of Using AI for Content Personalization**
As AI technology progresses, more companies are leveraging its capabilities for content personalization.
**Improved Customer Experience**: AI can analyze customer data to determine individual interests, preferences, and needs. With this information, [AI](https://web.facebook.com/safatechsolutions) tailors content to each customer, creating a personalized experience. Customers receive more relevant content, leading to higher engagement and satisfaction.
**Increased Conversions**: Relevant, targeted content leads to higher conversion rates. When customers receive tailored content, they're more likely to make a sale. AI enables personalized content at scale, providing customized experiences for each customer.
**Operational Efficiencies**: AI reduces the burden on marketing teams to personalize content. AI analyzes extensive customer data to uncover insights humans may miss. They use this information to generate personalized content for each customer. This allows marketing teams to focus their efforts on other high-impact activities.
**Improved Analytics**: AI uncovers hidden connections across data, providing deeper insights into customers. These insights allow companies to create targeted segments and personalization strategies. AI tracks content performance in real time, enabling rapid optimization. Companies observe what works for different customers and adapt.
While AI offers many content personalization opportunities, human oversight and judgment remain necessary. AI systems must be well-developed and trained to generate appropriate, high-quality content, and monitored to ensure they continue to function as intended. Combined with human input, AI powers personalized, engaging experiences at scale. Human intelligence and ethics guide AI to maximize its potential.
**Case Studies and Examples of AI in Content Personalization**
AI-powered content personalization is enabling companies to deliver customized experiences at scale. Here are a few examples of brands leveraging AI for content personalization:
**Netflix’s Recommendation Engine**
One of the first systems to use AI to customize content is Netflix's recommendation engine. Netflix makes movie and television recommendations based on user tastes. The secret to Netflix's success has been its curating of individualized content.
**Spotify’s Discover Weekly Playlist**
Discover Weekly on Spotify generates customized playlists according to listener preferences. Spotify has become better at recommending music and artists that users would like as they listen more. Spotify's engagement and music discovery have increased, thanks to Discover Weekly.
**Amazon’s Product Recommendations**
Amazon uses AI for personalized product recommendations based on shopper history. Recommendations cover Amazon's catalog, optimized for likely purchases. These personalized recommendations drive an estimated 35% of Amazon’s revenue.
**Anthropic AI Safety Assistant**
Anthropic personalizes language models using constitutional AI for users. They design their AI assistant to be helpful, harmless, and honest for every customer. The models self-supervise to avoid undesirable behaviors. This novel approach helps address AI alignment on an individual level.
AI has enabled a new level of personalization, transforming customer experiences across industries. When implemented, personalized AI can benefit both companies and consumers. The future of AI in content personalization is an exciting area to watch.
**The Future of AI in Content Personalization: What’s Next?**
**Improved data collection and Analysis**
As AI advances, people will collect and analyze more data. AI will better understand their needs, interests, and behaviors with more customer data. This will enable personalized experiences that meet customer needs.
**Predictive Personalization**
AI is enabling a shift from reactive personalization to predictive personalization. Instead of reacting, AI can predict customer needs based on past behaviors. For example, if a customer buys mystery novels, AI may suggest a new one. Predictive personalization creates seamless customer experiences by offering relevant recommendations and content.
**Conversational Interfaces**
Advancements in natural language processing will pave the way for sophisticated conversational interfaces. Chat bots and voice assistants use past interactions for personalized, natural conversations. For example, a customer may ask an e-commerce chat bot for gift ideas for a friend. The chat bot uses customer data to suggest gifts based on their friend's interests. Conversational interfaces will make AI interactions second nature in daily life.
**Customized Content Creation**
AI will get smarter at generating customized content for individual customers. AI generates personalized content for customers using natural language. Content will evolve with AI, offering personalized sophistication through vast data leverage. Customized content creation will enable more impactful customer experiences at scale.
AI's future in content personalization means using more data to understand customers better. This enables predictive personalization, powers conversational interfaces, and generates customized content. As AI advances, personalization becomes even more crucial for engaging customers. The possibilities for enhanced customer experiences are endless.
**FAQs**
**What is AI in content personalization?**
AI in content personalization uses AI to tailor digital content to user interests. It predicts and matches content to enhance user experiences.
**How does AI personalize content?**
By examining user data such as previous interactions, preferences, and habits, AI may modify content. It makes predictions about user preferences and instantly pairs them with appropriate information. AI makes recommendations that are more individualized for content, depending on user interactions.
**What are the benefits of AI-powered content personalization?**
Content customization enabled by AI has several advantages, such as:
- Improved customer experience through tailored content.
- Increased conversions due to targeted recommendations.
- Operational efficiencies by automating content customization.
- Enhanced analytics for deeper insights into customer behavior.
**How is AI revolutionizing content personalization?**
AI transforms content personalization with tailored creation, dynamic optimization, and improved user experiences. It uses customer data to create personalized content at scale, boosting engagement.
**Can you provide examples of AI in content personalization?**
A few examples are the AI safety assistant from Anthropic, Netflix, Spotify's Discover Weekly, and Amazon. They provide individualized experiences using AI, depending on user behavior and preferences.
**What does the future hold for AI in content personalization?**
[AI's](https://web.facebook.com/digitalcooperativemarketing) future in content personalization means smarter predictions, natural conversations, and custom content. As AI progresses, it will create smarter, more impactful, personalized experiences for users.
**How can businesses leverage AI for content personalization?**
Businesses can use AI for content personalization with advanced algorithms and extensive data. Clear goals, ethics, and ongoing adjustments shape the customer journey.
Customized material and suggestions are sent to users using AI-driven customization, which boosts engagement. AI technology may be leveraged by organizations to enhance consumer experiences and foster deeper engagement.
**Conclusion**
In conclusion, AI can revolutionize the customer experience through personalized content. With advanced algorithms and vast data, companies can personalize interactions. Approach the technology with clear business goals and ethical awareness. Moving forward, you can shape the customer journey. It requires strategy, care, and ongoing refinement as technology evolves. [AI-driven personalization](https://www.safatechsolutions.com/) promises to deepen engagement, but only if harnessed well. The future remains unwritten; craft it.
1,867,709 | How to build: a To-Do list app with an embedded AI copilot (Next.js, GPT4, & CopilotKit) | TL;DR A to-do list is a classic project for every dev. In today's world it is great to... | 0 | 2024-05-29T12:22:46 | https://dev.to/copilotkit/how-to-build-an-ai-powered-to-do-list-nextjs-gpt4-copilotkit-20i4 | webdev, programming, javascript, tutorial | ## **TL;DR**
A to-do list is a classic project for every dev. In today's world it is great to learn how to build with AI and to have some AI projects in your portfolio.
Today, I will go through step by step of how to build a to-do list with an embedded AI copilot for some AI magic 🪄.

We'll cover how to:
- Build the to-do list generator web app using Next.js, TypeScript, and Tailwind CSS.
- Use CopilotKit to integrate AI functionalities into the to-do list generator.
- Use AI chatbot to add lists, assign lists to someone, mark lists as completed, and delete lists.

---
## CopilotKit: The framework for building in-app AI copilots
CopilotKit is an [open-source AI copilot framework](https://github.com/CopilotKit/CopilotKit). We make it easy to integrate powerful AI into your React apps.
Build:
- ChatBot: Context-aware in-app chatbots that can take actions in-app 💬
- CopilotTextArea: AI-powered textFields with context-aware autocomplete & insertions 📝
- Co-Agents: In-app AI agents that can interact with your app & users 🤖

{% cta https://go.copilotkit.ai/bonnie %} Star CopilotKit ⭐️ {% endcta %}
---
## Prerequisites
To fully understand this tutorial, you need to have a basic understanding of React or Next.js.
Here are the tools required to build the AI-powered to-do list generator:
- [Nanoid](https://github.com/ai/nanoid) - a tiny, secure, URL-friendly, unique string ID generator for JavaScript.
- [OpenAI API](https://platform.openai.com/api-keys) - provides an API key that enables you to carry out various tasks using ChatGPT models.
- [CopilotKit](https://github.com/CopilotKit) - an open-source copilot framework for building custom AI chatbots, in-app AI agents, and text areas.
## Project Set up and Package Installation
First, create a Next.js application by running the code snippet below in your terminal:
```bash
npx create-next-app@latest todolistgenerator
```
Select your preferred configuration settings. For this tutorial, we'll be using TypeScript and Next.js App Router.

Next, install the Nanoid package and its dependencies.
```bash
npm i nanoid
```
Finally, install the CopilotKit packages. These packages enable us to retrieve data from the React state and add AI copilot to the application.
```bash
npm install @copilotkit/react-ui @copilotkit/react-textarea @copilotkit/react-core @copilotkit/backend @copilotkit/shared
```
Congratulations! You're now ready to build an AI-powered to-do list generator.
## **Building The To-Do List Generator Frontend**
In this section, I will walk you through the process of creating the to-do list generator frontend with static content to define the generator’s user interface.
To get started, go to `/[root]/src/app` in your code editor and create a folder called `types`. Inside the types folder, create a file named `todo.ts` and add the following code that defines a TypeScript interface called **`Todo`**.
The **`Todo`** interface defines an object structure where every todo item must have an **`id`**, **`text`**, and **`isCompleted`** status, while it may optionally have an **`assignedTo`** property.
```tsx
export interface Todo {
id: string;
text: string;
isCompleted: boolean;
assignedTo?: string;
}
```
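Before wiring this into React, the state transitions we will need can be sketched as plain immutable helpers (the interface is repeated so the snippet stands alone; these helper names are illustrative, not from CopilotKit):

```typescript
interface Todo {
  id: string;
  text: string;
  isCompleted: boolean;
  assignedTo?: string;
}

// Each helper returns a new array, matching how React state updates work.
const addTodo = (todos: Todo[], id: string, text: string): Todo[] => [
  ...todos,
  { id, text, isCompleted: false },
];

const toggleComplete = (todos: Todo[], id: string): Todo[] =>
  todos.map((t) => (t.id === id ? { ...t, isCompleted: !t.isCompleted } : t));

const assignPerson = (todos: Todo[], id: string, person: string | null): Todo[] =>
  todos.map((t) => (t.id === id ? { ...t, assignedTo: person ?? undefined } : t));

const deleteTodo = (todos: Todo[], id: string): Todo[] =>
  todos.filter((t) => t.id !== id);
```

These are exactly the operations the copilot will later trigger (add, assign, complete, delete), expressed without any UI so the logic is easy to follow.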
Then go to `/[root]/src/app` in your code editor and create a folder called `components`. Inside the components folder, create three files named `Header.tsx`, `TodoList.tsx` and `TodoItem.tsx` .
In the `Header.tsx` file, add the following code that defines a functional component named `Header` that will render the generator’s navbar.
```tsx
import Link from "next/link";
export default function Header() {
return (
<>
<header className="flex flex-wrap sm:justify-start sm:flex-nowrap z-50 w-full bg-gray-800 border-b border-gray-200 text-sm py-3 sm:py-0 ">
<nav
className="relative max-w-7xl w-full mx-auto px-4 sm:flex sm:items-center sm:justify-between sm:px-6 lg:px-8"
aria-label="Global">
<div className="flex items-center justify-between">
<Link
className="w-full flex-none text-xl text-white font-semibold p-6"
href="/"
aria-label="Brand">
To-Do List Generator
</Link>
</div>
</nav>
</header>
</>
);
}
```
In the `TodoItem.tsx` file, add the following code that defines a React functional component called **`TodoItem`**. It uses TypeScript to ensure type safety and to define the props that the component accepts.
```tsx
import { Todo } from "../types/todo"; // Importing the Todo type from a types file
// Defining the interface for the props that the TodoItem component will receive
interface TodoItemProps {
todo: Todo; // A single todo item
toggleComplete: (id: string) => void; // Function to toggle the completion status of a todo
deleteTodo: (id: string) => void; // Function to delete a todo
assignPerson: (id: string, person: string | null) => void; // Function to assign a person to a todo
hasBorder?: boolean; // Optional prop to determine if the item should have a border
}
// Defining the TodoItem component as a functional component with the specified props
export const TodoItem: React.FC<TodoItemProps> = ({
todo,
toggleComplete,
deleteTodo,
assignPerson,
hasBorder,
}) => {
return (
<div
className={
"flex items-center justify-between px-4 py-2 group" +
(hasBorder ? " border-b" : "") // Conditionally adding a border class if hasBorder is true
}>
<div className="flex items-center">
<input
className="h-5 w-5 text-blue-500"
type="checkbox"
checked={todo.isCompleted} // Checkbox is checked if the todo is completed
onChange={() => toggleComplete(todo.id)} // Toggle completion status on change
/>
<span
className={`ml-2 text-sm text-white ${
todo.isCompleted ? "text-gray-500 line-through" : "text-gray-900" // Apply different styles if the todo is completed
}`}>
{todo.assignedTo && (
<span className="border rounded-md text-xs py-[2px] px-1 mr-2 border-purple-700 uppercase bg-purple-400 text-black font-medium">
{todo.assignedTo} {/* Display the assigned person's name if available */}
</span>
)}
{todo.text} {/* Display the todo text */}
</span>
</div>
<div>
<button
onClick={() => deleteTodo(todo.id)} // Delete the todo on button click
className="text-red-500 opacity-0 group-hover:opacity-100 transition-opacity duration-200">
<svg
xmlns="http://www.w3.org/2000/svg"
fill="none"
viewBox="0 0 24 24"
strokeWidth={1.5}
stroke="currentColor"
className="w-5 h-5">
<path
strokeLinecap="round"
strokeLinejoin="round"
d="m14.74 9-.346 9m-4.788 0L9.26 9m9.968-3.21c.342.052.682.107 1.022.166m-1.022-.165L18.16 19.673a2.25 2.25 0 0 1-2.244 2.077H8.084a2.25 2.25 0 0 1-2.244-2.077L4.772 5.79m14.456 0a48.108 48.108 0 0 0-3.478-.397m-12 .562c.34-.059.68-.114 1.022-.165m0 0a48.11 48.11 0 0 1 3.478-.397m7.5 0v-.916c0-1.18-.91-2.164-2.09-2.201a51.964 51.964 0 0 0-3.32 0c-1.18.037-2.09 1.022-2.09 2.201v.916m7.5 0a48.667 48.667 0 0 0-7.5 0"
/>
</svg>
</button>
<button
onClick={() => {
const name = prompt("Assign person to this task:");
assignPerson(todo.id, name);
}}
className="ml-2 text-blue-500 opacity-0 group-hover:opacity-100 transition-opacity duration-200">
<svg
xmlns="http://www.w3.org/2000/svg"
fill="none"
viewBox="0 0 24 24"
strokeWidth={1.5}
stroke="currentColor"
className="w-5 h-5">
<path
strokeLinecap="round"
strokeLinejoin="round"
d="M18 7.5v3m0 0v3m0-3h3m-3 0h-3m-2.25-4.125a3.375 3.375 0 1 1-6.75 0 3.375 3.375 0 0 1 6.75 0ZM3 19.235v-.11a6.375 6.375 0 0 1 12.75 0v.109A12.318 12.318 0 0 1 9.374 21c-2.331 0-4.512-.645-6.374-1.766Z"
/>
</svg>
</button>
</div>
</div>
);
};
```
In the `TodoList.tsx` file, add the following code that defines a React functional component named **`TodoList`**. This component is used to manage and display a list of to-do items.
```tsx
"use client";
import { TodoItem } from "./TodoItem"; // Importing the TodoItem component
import { nanoid } from "nanoid"; // Importing the nanoid library for generating unique IDs
import { useState } from "react"; // Importing the useState hook from React
import { Todo } from "../types/todo"; // Importing the Todo type
// Defining the TodoList component as a functional component
export const TodoList: React.FC = () => {
// State to hold the list of todos
const [todos, setTodos] = useState<Todo[]>([]);
// State to hold the current input value
const [input, setInput] = useState("");
// Function to add a new todo
const addTodo = () => {
if (input.trim() !== "") {
// Check if the input is not empty
const newTodo: Todo = {
id: nanoid(), // Generate a unique ID for the new todo
text: input.trim(), // Trim the input text
isCompleted: false, // Set the initial completion status to false
};
setTodos([...todos, newTodo]); // Add the new todo to the list
setInput(""); // Clear the input field
}
};
// Function to handle key press events
const handleKeyPress = (e: React.KeyboardEvent) => {
if (e.key === "Enter") {
// Check if the Enter key was pressed
addTodo(); // Add the todo
}
};
// Function to toggle the completion status of a todo
const toggleComplete = (id: string) => {
setTodos(
todos.map((todo) =>
todo.id === id ? { ...todo, isCompleted: !todo.isCompleted } : todo
)
);
};
// Function to delete a todo
const deleteTodo = (id: string) => {
setTodos(todos.filter((todo) => todo.id !== id));
};
// Function to assign a person to a todo
const assignPerson = (id: string, person: string | null) => {
setTodos(
todos.map((todo) =>
todo.id === id
? { ...todo, assignedTo: person ? person : undefined }
: todo
)
);
};
return (
<div>
<div className="flex mb-4">
<input
className="border rounded-md p-2 flex-1 mr-2"
value={input}
onChange={(e) => setInput(e.target.value)}
onKeyDown={handleKeyPress} // Add this to handle the Enter key press
/>
<button
className="bg-blue-500 rounded-md p-2 text-white"
onClick={addTodo}>
Add Todo
</button>
</div>
{todos.length > 0 && ( // Check if there are any todos
<div className="border rounded-lg">
{todos.map((todo, index) => (
<TodoItem
key={todo.id} // Unique key for each todo item
todo={todo} // Pass the todo object as a prop
toggleComplete={toggleComplete} // Pass the toggleComplete function as a prop
deleteTodo={deleteTodo} // Pass the deleteTodo function as a prop
assignPerson={assignPerson} // Pass the assignPerson function as a prop
hasBorder={index !== todos.length - 1} // Conditionally add a border to all but the last item
/>
))}
</div>
)}
</div>
);
};
```
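The `toggleComplete` and `assignPerson` handlers above follow the same immutable-update pattern: map over the list and replace only the matching item with a copied object. Because the logic is independent of React, it can be factored into a pure function and tested on its own — a sketch (the standalone `toggleComplete` helper below is my own refactoring, not code from the tutorial):

```typescript
interface Todo {
  id: string;
  text: string;
  isCompleted: boolean;
  assignedTo?: string;
}

// Return a new array in which only the todo with the matching id is
// toggled; neither the input array nor its items are mutated, which is
// what allows React to detect the state change.
function toggleComplete(todos: Todo[], id: string): Todo[] {
  return todos.map((todo) =>
    todo.id === id ? { ...todo, isCompleted: !todo.isCompleted } : todo
  );
}

const before: Todo[] = [
  { id: "a", text: "Warm up", isCompleted: false },
  { id: "b", text: "Lift weights", isCompleted: false },
];
const after = toggleComplete(before, "a");

console.log(after[0].isCompleted); // true
console.log(before[0].isCompleted); // false (original left untouched)
```

The same shape works for `assignPerson`: copy the matching item and overwrite `assignedTo` instead of `isCompleted`.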
Next, go to the `/[root]/src/app/page.tsx` file, and add the following code that imports the `TodoList` and `Header` components and defines a functional component named `Home`.
```tsx
import Header from "./components/Header";
import { TodoList } from "./components/TodoList";
export default function Home() {
return (
<>
<Header />
<div className="border rounded-md max-w-2xl mx-auto p-4 mt-4">
<h1 className="text-2xl text-white font-bold mb-4">
Create a to-do list
</h1>
<TodoList />
</div>
</>
);
}
```
Next, remove the CSS code in the `globals.css` file and add the following CSS code.
```css
@tailwind base;
@tailwind components;
@tailwind utilities;
body {
height: 100vh;
background-color: rgb(16, 23, 42);
}
```
Finally, run the command `npm run dev` on the command line and then navigate to http://localhost:3000/.
Now you should view the To-Do List generator frontend on your browser, as shown below.

## **Integrating AI Functionalities To The Todo List Generator Using CopilotKit**
In this section, you will learn how to add an AI copilot to the To-Do List generator to generate lists using CopilotKit.
CopilotKit offers both frontend and [backend](https://docs.copilotkit.ai/getting-started/quickstart-backend) packages. They enable you to plug into the React states and process application data on the backend using AI agents.
First, let's add the CopilotKit React components to the To-Do List generator frontend.
### **Adding CopilotKit to the To-Do List Generator Frontend**
Here, I will walk you through the process of integrating the To-Do List generator with the CopilotKit frontend to facilitate list generation.
To get started, import the `useCopilotReadable` and `useCopilotAction` custom hooks at the top of the `/[root]/src/app/components/TodoList.tsx` file using the code snippet below.
```tsx
import { useCopilotAction, useCopilotReadable } from "@copilotkit/react-core";
```
Inside the `TodoList` function, below the state variables, add the following code that uses the `useCopilotReadable` hook to add the to-do lists that will be generated as context for the in-app chatbot. The hook makes the to-do lists readable to the copilot.
```tsx
useCopilotReadable({
description: "The user's todo list.",
value: todos,
});
```
Below the code above, add the following code that uses the `useCopilotAction` hook to set up an action called `updateTodoList` which will enable the generation of to-do lists.
The action takes a single parameter called `items` and contains a handler function that generates todo lists based on a given prompt.
Inside the handler function, the `todos` state is updated with the newly generated todo list, as shown below.
```tsx
// Define the "updateTodoList" action using the useCopilotAction function
useCopilotAction({
// Name of the action
name: "updateTodoList",
// Description of what the action does
description: "Update the users todo list",
// Define the parameters that the action accepts
parameters: [
{
// The name of the parameter
name: "items",
// The type of the parameter, an array of objects
type: "object[]",
// Description of the parameter
description: "The new and updated todo list items.",
// Define the attributes of each object in the items array
attributes: [
{
// The id of the todo item
name: "id",
type: "string",
description:
"The id of the todo item. When creating a new todo item, just make up a new id.",
},
{
// The text of the todo item
name: "text",
type: "string",
description: "The text of the todo item.",
},
{
// The completion status of the todo item
name: "isCompleted",
type: "boolean",
description: "The completion status of the todo item.",
},
{
// The person assigned to the todo item
name: "assignedTo",
type: "string",
description:
"The person assigned to the todo item. If you don't know, assign it to 'YOU'.",
// This attribute is required
required: true,
},
],
},
],
// Define the handler function that executes when the action is invoked
handler: ({ items }) => {
// Log the items to the console for debugging purposes
console.log(items);
// Create a copy of the existing todos array
const newTodos = [...todos];
// Iterate over each item in the items array
for (const item of items) {
// Find the index of the existing todo item with the same id
const existingItemIndex = newTodos.findIndex(
(todo) => todo.id === item.id
);
// If an existing item is found, update it
if (existingItemIndex !== -1) {
newTodos[existingItemIndex] = item;
}
// If no existing item is found, add the new item to the newTodos array
else {
newTodos.push(item);
}
}
// Update the state with the new todos array
setTodos(newTodos);
},
// Provide feedback or a message while the action is processing
render: "Updating the todo list...",
});
```
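The handler's merge rule — replace an existing item when the id matches, otherwise append — does not depend on React or CopilotKit, so it can be isolated as a pure function for testing. A sketch (the `mergeTodos` name is mine):

```typescript
interface Todo {
  id: string;
  text: string;
  isCompleted: boolean;
  assignedTo?: string;
}

// Merge incoming items into an existing list: an item whose id is already
// present replaces the old entry; an unknown id is appended at the end.
function mergeTodos(existing: Todo[], items: Todo[]): Todo[] {
  const merged = [...existing];
  for (const item of items) {
    const index = merged.findIndex((todo) => todo.id === item.id);
    if (index !== -1) {
      merged[index] = item; // update in the copy
    } else {
      merged.push(item); // append new item
    }
  }
  return merged;
}

const current: Todo[] = [{ id: "1", text: "Old text", isCompleted: false }];
const result = mergeTodos(current, [
  { id: "1", text: "Updated text", isCompleted: true },
  { id: "2", text: "Brand new", isCompleted: false },
]);

console.log(result.length); // 2
console.log(result[0].text); // "Updated text"
```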
Next, add the following code that uses the `useCopilotAction` hook to set up an action called `deleteTodo`, which enables you to delete a to-do item.
The action takes a parameter called `id` and contains a handler function that updates the `todos` state by filtering out the todo item with the given id.
```tsx
// Define the "deleteTodo" action using the useCopilotAction function
useCopilotAction({
// Name of the action
name: "deleteTodo",
// Description of what the action does
description: "Delete a todo item",
// Define the parameters that the action accepts
parameters: [
{
// The name of the parameter
name: "id",
// The type of the parameter, a string
type: "string",
// Description of the parameter
description: "The id of the todo item to delete.",
},
],
// Define the handler function that executes when the action is invoked
handler: ({ id }) => {
// Update the state by filtering out the todo item with the given id
setTodos(todos.filter((todo) => todo.id !== id));
},
// Provide feedback or a message while the action is processing
render: "Deleting a todo item...",
});
```
After that, go to `/[root]/src/app/page.tsx` file and import CopilotKit frontend packages and styles at the top using the code below.
```tsx
import { CopilotKit } from "@copilotkit/react-core";
import { CopilotPopup } from "@copilotkit/react-ui";
import "@copilotkit/react-ui/styles.css";
```
Then use `CopilotKit` to wrap the `CopilotPopup` and `TodoList` components, as shown below. The `CopilotKit` component specifies the URL for CopilotKit's backend endpoint (`/api/copilotkit`), while the `CopilotPopup` renders the in-app chatbot that you can give prompts to generate todo lists.
```tsx
export default function Home() {
return (
<>
<Header />
<div className="border rounded-md max-w-2xl mx-auto p-4 mt-4">
<h1 className="text-2xl text-white font-bold mb-4">
Create a to-do list
</h1>
<CopilotKit runtimeUrl="/api/copilotkit">
<TodoList />
<CopilotPopup
instructions={
"Help the user manage a todo list. If the user provides a high level goal, " +
"break it down into a few specific tasks and add them to the list"
}
defaultOpen={true}
labels={{
title: "Todo List Copilot",
initial: "Hi you! 👋 I can help you manage your todo list.",
}}
clickOutsideToClose={false}
/>
</CopilotKit>
</div>
</>
);
}
```
After that, run the development server and navigate to [http://localhost:3000](http://localhost:3000/). You should see that the in-app chatbot was integrated into the todo list generator.

### **Adding CopilotKit Backend to the To-Do List Generator**
Here, I will walk you through the process of integrating the todo list generator with the CopilotKit backend, which handles requests from the frontend and provides function calling and access to various LLM backends such as GPT.
To get started, create a file called `.env.local` in the root directory. Then add the environment variable below, which holds your OpenAI API key, to the file.
```bash
OPENAI_API_KEY="Your OpenAI API key"
```
To get an OpenAI API key, navigate to https://platform.openai.com/api-keys.

After that, go to `/[root]/src/app` and create a folder called `api`. In the `api` folder, create a folder called `copilotkit`.
In the `copilotkit` folder, create a file called `route.ts` and add the following code, which sets up the backend endpoint that processes POST requests.
```tsx
// Import the necessary modules from the "@copilotkit/backend" package
import { CopilotRuntime, OpenAIAdapter } from "@copilotkit/backend";
// Define an asynchronous function to handle POST requests
export async function POST(req: Request): Promise<Response> {
// Create a new instance of CopilotRuntime
const copilotKit = new CopilotRuntime({});
// Use the copilotKit to generate a response using the OpenAIAdapter
// Pass the incoming request (req) and a new instance of OpenAIAdapter to the response method
return copilotKit.response(req, new OpenAIAdapter());
}
```
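If `OPENAI_API_KEY` is missing or misnamed, the failure only surfaces once the first chat request reaches OpenAI. A small guard makes the problem obvious at request time — this check is my own addition, not something CopilotKit requires, and the validation itself is just a string check you could call with `process.env` at the top of the handler:

```typescript
// Return the API key if present and non-empty; otherwise fail fast with a
// descriptive error instead of letting the downstream OpenAI call fail.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.OPENAI_API_KEY;
  if (!key || key.trim() === "") {
    throw new Error(
      "OPENAI_API_KEY is not set. Add it to .env.local and restart the dev server."
    );
  }
  return key;
}

console.log(requireApiKey({ OPENAI_API_KEY: "sk-test" })); // "sk-test"
```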
## How To Generate Todo Lists
Now go to the in-app chatbot you integrated earlier and give it a prompt like, “I want to go to the gym to do a full body workout. add to the list workout routine I should follow”
Once it is done generating, you should see the full-body workout routine added as a to-do list, as shown below.

You can assign the to-do list to someone by giving the chatbot a prompt like, “assign the to-do list to Doe.”

You can mark the to-do list as completed by giving the chatbot a prompt like, “mark the to-do list as completed.”

You can delete the to-do list by giving the chatbot a prompt like, “delete the todo list.”

Congratulations! You’ve completed the project for this tutorial.
## Conclusion
[CopilotKit](https://copilotkit.ai/) is an incredible tool that allows you to add AI Copilots to your products within minutes. Whether you're interested in AI chatbots and assistants or automating complex tasks, CopilotKit makes it easy.
If you need to build an AI product or integrate an AI tool into your software applications, you should consider CopilotKit.
You can find the source code for this tutorial on GitHub: https://github.com/TheGreatBonnie/AIpoweredToDoListGenerator | the_greatbonnie |
1,868,953 | Scroll progress animations in CSS.🚀 | Crafting a Dynamic Scroll-Tracking Blog Post with HTML and CSS Creating a visually... | 0 | 2024-05-29T12:19:51 | https://dev.to/dharamgfx/scroll-progress-animations-in-css-4h0g | webdev, beginners, design, css | ### Crafting a Dynamic Scroll-Tracking Blog Post with HTML and CSS
Creating a visually engaging blog post that also provides user feedback through dynamic elements like a progress bar can significantly enhance the reading experience. Here’s a breakdown of how you can achieve this using HTML and CSS.
#### 1. The HTML Structure
```html
<div class="progress"></div>
<div class="container">
<h1>Anonymous scroll timeline</h1>
<p>
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod
tempor incididunt ut labore et dolore magna aliqua. Commodo viverra maecenas
accumsan lacus. Orci sagittis eu volutpat odio facilisis mauris. Eu nisl
nunc mi ipsum faucibus vitae aliquet nec. Amet nisl purus in mollis nunc
sed. Egestas tellus rutrum tellus pellentesque eu tincidunt tortor aliquam.
Lorem sed risus ultricies tristique nulla. Commodo sed egestas egestas
fringilla phasellus faucibus. Semper eget duis at tellus at urna condimentum
mattis pellentesque. Porta lorem mollis aliquam ut porttitor leo a diam. At
lectus urna duis convallis convallis tellus id interdum velit. Placerat orci
nulla pellentesque dignissim enim sit amet venenatis urna. Rutrum tellus
pellentesque eu tincidunt tortor. Nulla facilisi cras fermentum odio eu
feugiat. Aliquet risus feugiat in ante metus. Quis imperdiet massa tincidunt
nunc pulvinar sapien et. Vel pharetra vel turpis nunc.
</p>
<p>
Potenti nullam ac tortor vitae purus. Tempor orci dapibus ultrices in
iaculis nunc sed augue. Adipiscing elit duis tristique sollicitudin nibh.
Luctus accumsan tortor posuere ac ut consequat semper. Enim nulla aliquet
porttitor lacus. Netus et malesuada fames ac. Aliquam ultrices sagittis orci
a scelerisque. Fringilla phasellus faucibus scelerisque eleifend donec
pretium vulputate sapien. Nibh praesent tristique magna sit amet purus
gravida quis. Mi proin sed libero enim sed faucibus turpis in eu. Natoque
penatibus et magnis dis parturient montes nascetur ridiculus. Pellentesque
elit ullamcorper dignissim cras tincidunt lobortis. Nunc faucibus a
pellentesque sit amet porttitor eget dolor. Luctus accumsan tortor posuere
ac ut. Et molestie ac feugiat sed lectus vestibulum mattis ullamcorper
velit. Ac odio tempor orci dapibus ultrices in iaculis nunc sed.
</p>
<p>
Molestie ac feugiat sed lectus vestibulum mattis. Elementum curabitur vitae
nunc sed velit dignissim sodales ut. Netus et malesuada fames ac turpis
egestas sed tempus. Viverra nam libero justo laoreet sit amet cursus sit
amet. Maecenas sed enim ut sem viverra aliquet eget. Et netus et malesuada
fames ac turpis egestas maecenas pharetra. Imperdiet proin fermentum leo vel
orci porta. Nunc eget lorem dolor sed viverra ipsum nunc aliquet. Facilisis
mauris sit amet massa vitae. Cras semper auctor neque vitae. Adipiscing diam
donec adipiscing tristique risus. Scelerisque eu ultrices vitae auctor eu.
Adipiscing vitae proin sagittis nisl rhoncus mattis rhoncus urna. Egestas
quis ipsum suspendisse ultrices gravida. Semper quis lectus nulla at
volutpat diam. Egestas congue quisque egestas diam in arcu.
</p>
<p>
Est velit egestas dui id ornare arcu odio ut sem. Tortor consequat id porta
nibh venenatis. Proin sagittis nisl rhoncus mattis rhoncus urna neque. Porta
non pulvinar neque laoreet suspendisse interdum. Lacus vel facilisis
volutpat est velit egestas dui. Facilisi morbi tempus iaculis urna id
volutpat. Venenatis urna cursus eget nunc scelerisque viverra. Ultrices
gravida dictum fusce ut. Eu augue ut lectus arcu. Orci dapibus ultrices in
iaculis. Rhoncus mattis rhoncus urna neque viverra justo nec ultrices. Odio
eu feugiat pretium nibh ipsum consequat. Accumsan in nisl nisi scelerisque
eu ultrices vitae. Nunc faucibus a pellentesque sit. Ultricies integer quis
auctor elit sed vulputate mi. Nulla aliquet enim tortor at auctor urna nunc
id cursus.
</p>
<p>
Integer enim neque volutpat ac tincidunt vitae semper. Condimentum lacinia
quis vel eros donec ac odio tempor orci. Imperdiet dui accumsan sit amet
nulla facilisi morbi tempus. Suspendisse potenti nullam ac tortor vitae. Non
sodales neque sodales ut. Elementum eu facilisis sed odio. Aliquet nec
ullamcorper sit amet risus nullam eget felis eget. Diam phasellus vestibulum
lorem sed risus ultricies tristique. Facilisis sed odio morbi quis. Diam
quis enim lobortis scelerisque fermentum dui faucibus. Ullamcorper dignissim
cras tincidunt lobortis feugiat vivamus at augue eget. Platea dictumst
vestibulum rhoncus est pellentesque elit ullamcorper dignissim.
</p>
</div>
```
##### HTML Explanation
1. **Progress Bar Div**:
```html
<div class="progress"></div>
```
- This empty `div` becomes the progress bar; the `.progress` CSS rule defined later fixes it to the top of the viewport and animates its width.
2. **Container Div**:
```html
<div class="container">
```
- This contains the main content of the blog post.
3. **Heading and Paragraphs**:
```html
<h1>Anonymous scroll timeline</h1>
<p> ... </p>
```
- The heading introduces the blog post, and multiple paragraphs provide the content.
#### 2. The CSS Styling
```css
* {
box-sizing: border-box;
}
body {
font-family: "Helvetica", sans-serif;
line-height: 1.6;
min-height: 300vh;
margin: 0;
font-size: clamp(1rem, 1rem + 1vw, 1.5rem);
}
h1 {
line-height: 1.25;
}
.container {
max-width: 800px;
margin: 0 auto;
padding: clamp(1rem, 2vw, 5rem);
}
.progress {
height: 1rem;
background: blue;
position: fixed;
top: 0;
left: 0;
width: 100%;
transform-origin: 0 50%;
animation: scaleProgress auto linear;
animation-timeline: scroll(root);
}
@keyframes scaleProgress {
0% {
transform: scaleX(0);
}
100% {
transform: scaleX(1);
}
}
```
##### CSS Explanation
1. **Global Styles**:
```css
* {
box-sizing: border-box;
}
```
- Ensures consistent box-sizing for all elements.
2. **Body Styling**:
```css
body {
font-family: "Helvetica", sans-serif;
line-height: 1.6;
min-height: 300vh;
margin: 0;
font-size: clamp(1rem, 1rem + 1vw, 1.5rem);
}
```
- Sets the body font, line height, minimum height, and responsive font size.
3. **Heading Styling**:
```css
h1 {
line-height: 1.25;
}
```
- Adjusts the heading line height for better readability.
4. **Container Styling**:
```css
.container {
max-width: 800px;
margin: 0 auto;
padding: clamp(1rem, 2vw, 5rem);
}
```
- Sets a maximum width, centers the container, and adds responsive padding.
5. **Progress Bar Styling**:
```css
.progress {
height: 1rem;
background: blue;
position: fixed;
top: 0;
left: 0;
width: 100%;
transform-origin: 0 50%;
animation: scaleProgress auto linear;
animation-timeline: scroll(root);
}
```
- Styles the progress bar to be fixed at the top, spanning the full width of the viewport, with an animation that tracks the scroll progress.
6. **Keyframes for Progress Bar Animation**:
```css
@keyframes scaleProgress {
0% {
transform: scaleX(0);
}
100% {
transform: scaleX(1);
}
}
```
- Defines the animation for the progress bar, scaling its width from 0% to 100% as the user scrolls.
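Note that `animation-timeline: scroll()` is a relatively new CSS feature that is not yet supported in every browser, so a script fallback can still be worthwhile. The math behind the bar is simply distance scrolled divided by total scrollable distance; a sketch of that calculation (the `scrollProgress` helper is my own naming):

```typescript
// Fraction of the document scrolled, clamped to the range [0, 1].
function scrollProgress(
  scrollTop: number, // how far the page has been scrolled
  scrollHeight: number, // total height of the document
  clientHeight: number // height of the viewport
): number {
  const scrollable = scrollHeight - clientHeight;
  if (scrollable <= 0) return 0; // nothing to scroll
  return Math.min(Math.max(scrollTop / scrollable, 0), 1);
}

// In a fallback, a scroll listener would apply this to the bar, e.g.:
// bar.style.transform = `scaleX(${scrollProgress(
//   window.scrollY, document.documentElement.scrollHeight, window.innerHeight)})`;

console.log(scrollProgress(150, 400, 100)); // 0.5
console.log(scrollProgress(400, 400, 100)); // 1 (clamped at the bottom)
```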
### Conclusion
This simple yet effective setup creates a dynamic blog post that visually engages readers with a scroll-tracking progress bar. By understanding each component, you can easily customize and extend this template for your specific needs. | dharamgfx |
1,868,952 | Srivinayakarmc building and materials | Introduction With a focus on sustainability and innovation, Srivinayakarmc is dedicated to... | 0 | 2024-05-29T12:19:14 | https://dev.to/sri_vinayaka_6dfa8d99d718/srivinayakarmc-building-and-materials-2gbe | Introduction
With a focus on sustainability and innovation, [Srivinayakarmc](https://www.srivinayakarmc.com/bluemetals.html) is dedicated to supporting builders, contractors, and developers in creating durable and efficient structures.
Diverse Range of Building Materials
Srivinayakarmc prides itself on offering an extensive selection of construction materials, catering to various stages of building projects:
1. Cement and Concrete: Cement is the cornerstone of any construction project, and Srivinayakarmc ensures that only top-grade options are available. Their concrete mixes are designed to provide superior strength and durability, sourced from trusted manufacturers known for their consistency and quality.
2. Steel and Reinforcement: Structural integrity is paramount in construction, and Srivinayakarmc provides a range of high-quality steel bars, rods, and mesh. These materials undergo rigorous testing to ensure they meet industry standards, providing the necessary support for robust construction.
3. Bricks and Blocks: From traditional clay bricks to modern alternatives like fly ash bricks, hollow blocks, and solid blocks, [Srivinayakarmc](https://www.srivinayakarmc.com/bluemetals.html) offers products that enhance the strength and energy efficiency of buildings. Their selection ensures that builders have access to the right materials for any project.
4. Aggregates: Essential for concrete production and other construction uses, Srivinayakarmc’s aggregates, including gravel, crushed stone, and sand, are carefully processed and selected for their quality and consistency.
5. Plumbing and Electrical Supplies: Recognizing the importance of comprehensive supply solutions, Srivinayakarmc offers a wide array of plumbing and electrical materials. Their inventory includes pipes, fittings, wires, switches, and more, ensuring all aspects of construction are well-covered.
6. Tiles and Flooring: Aesthetics and functionality are critical in flooring solutions. [Srivinayakarmc](https://www.srivinayakarmc.com/bluemetals.html) provides an extensive range of tiles, including ceramic, porcelain, and vitrified options, alongside other flooring materials, catering to diverse design preferences and durability needs.

| sri_vinayaka_6dfa8d99d718 | |
1,868,951 | nvidia-dkms-545 error in Ubuntu 24.04 | nvidia-dkms-545 module won't build in Ubuntu 24.04 LTS with kernel 6.8. The only solution for now to... | 0 | 2024-05-29T12:18:41 | https://dev.to/ordigital/nvidia-dkms-545-error-in-ubuntu-2404-585b | nvidia, ubuntu, lts, kernel | `nvidia-dkms-545` module won't build in Ubuntu 24.04 LTS with kernel 6.8. The only solution for now to have `nvidia-driver-545` with CUDA is to remove `linux-image-6.8.0-31-generic` and use `linux-image-6.5.0-35-generic` instead. | ordigital |
1,868,950 | Logo Design | Logo Design Services | Logo Designers Near Me | Get unique and impactful logos with our custom logo design services. From business logos to t-shirt... | 0 | 2024-05-29T12:18:30 | https://dev.to/prachi_pare_e410f7b6715d0/logo-design-logo-design-services-logo-designers-near-me-47lj | [Get unique and impactful logos with our custom logo design services. From business logos to t-shirt logos, our professional logo designers near you use the latest AI and 3D design techniques to create memorable brand logos. Enhance your brand identity today.](https://bhagirathtechnologies.com/services/5) | prachi_pare_e410f7b6715d0 | |
1,868,949 | Srivinayakarmc Building material supplier near me | Customer-Centric Approach At Srivinayakarmc, customer satisfaction is paramount. They understand that... | 0 | 2024-05-29T12:15:37 | https://dev.to/sri_vinayaka_6dfa8d99d718/srivinayakarmc-building-material-supplier-near-me-3kdp | Customer-Centric Approach
At [Srivinayakarmc](https://www.srivinayakarmc.com/rmc.html), customer satisfaction is paramount. They understand that every construction project has unique requirements, and they strive to provide tailored solutions that meet these needs. Their team of knowledgeable and experienced professionals is always available to offer expert advice and guidance, helping customers select the right materials for their projects.
Sustainability and Innovation
Srivinayakarmc is committed to promoting sustainable construction practices. They offer eco-friendly materials that reduce the environmental impact of construction activities. By choosing products that are energy-efficient and sustainable, Srivinayakarmc helps builders create structures that are not only strong and durable but also environmentally responsible.
Innovation is another cornerstone of [Srivinayakarmc’s](https://www.srivinayakarmc.com/rmc.html) philosophy. They continuously explore new technologies and materials that can improve construction processes and outcomes. By staying at the forefront of industry advancements, Srivinayakarmc ensures that their customers have access to the latest and most effective building solutions.
Reliable Logistics and Supply Chain
A critical factor in the construction industry is the timely delivery of materials. Srivinayakarmc has developed a robust logistics and supply chain network that ensures prompt and reliable delivery of materials to construction sites. Their efficient logistics management minimizes delays and helps keep projects on schedule.
Building Strong Relationships
Over the years, Srivinayakarmc has built strong relationships with a diverse range of clients, from individual builders to large construction companies. Their reputation for reliability, quality, and customer service has earned them the trust and loyalty of clients across the industry.
Conclusion
[Srivinayakarmc](https://www.srivinayakarmc.com/rmc.html) stands out as a premier supplier of building construction materials, offering a comprehensive range of products, unwavering commitment to quality, and exceptional customer service. Whether you are embarking on a small residential project or a large commercial development, Srivinayakarmc is your trusted partner, providing the materials and support you need to build with confidence and excellence. With Srivinayakarmc, you can be assured that your construction projects are in capable and reliable hands.
 | sri_vinayaka_6dfa8d99d718 | |
1,868,948 | React Hooks: A Comprehensive Guide | useState The useState Hook helps you add state to functional components. It takes in an... | 0 | 2024-05-29T12:13:53 | https://dev.to/elightwalk/react-hooks-a-comprehensive-guide-394o | reacthook, react, reactjsdevelopment, reactdeveloper | ## **useState**
The useState Hook helps you add state to functional components. It takes in an initial value and returns an array with two elements: the current state value and a function to update it. This replaces the need for class components to have a constructor and call the setState method.
### Syntax
```
const [state, setState] = useState(initialState);
```
### Example
```
import React, { useState } from 'react';
function Counter() {
const [count, setCount] = useState(0);
return (
<div>
<p>You clicked {count} times</p>
<button onClick={() => setCount(count + 1)}>
Click me
</button>
</div>
);
}
```
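One detail worth knowing: `setCount(count + 1)` reads the `count` captured at render time, so several updates queued in the same render can all see the same stale value; the functional form `setCount(c => c + 1)` avoids this. The difference can be simulated outside React (this is a plain-TypeScript sketch of React's update queue, not React itself):

```typescript
// Two queued updates that both read a value captured before either ran.
let count = 0;
const captured = count;
const staleUpdates = [captured + 1, captured + 1];
for (const next of staleUpdates) count = next;
console.log(count); // 1 — the second update overwrote, it did not add

// Two queued functional updates: each receives the latest value.
count = 0;
const functionalUpdates = [(c: number) => c + 1, (c: number) => c + 1];
for (const update of functionalUpdates) count = update(count);
console.log(count); // 2
```

In the `Counter` example above this would look like `setCount(c => c + 1)`.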
## **useEffect**
The useEffect hook in React allows us to perform side effects in our components. It takes a callback function and an optional array of dependencies. The callback is executed after the component renders; when a dependency array is provided, the effect re-runs only if one of the dependencies has changed. This hook is useful for fetching data, subscribing to events, and performing other side effects in a component's lifecycle.
### Syntax
```
useEffect(() => {
// Side effect logic
return () => {
// Cleanup logic
};
}, [dependencies]);
```
1. No Dependency Array: The effect runs after every render.
2. Empty Dependency Array ([]): The effect runs only once after the initial render.
3. Dependencies in the Array: The effect runs whenever any of the dependencies change.
### 1. Running Effect After Every Render
```
useEffect(() => {
  // Code
});
```
### 2. Running Effect Only Once (After Initial Render)
```
useEffect(() => {
  // Code
}, []); // componentDidMount
```
### 3. Running Effect When Dependencies Change
```
useEffect(() => {
  // Code
}, [props]); // componentDidUpdate
```
### Example
```
import React, { useState, useEffect } from 'react';
function Example() {
const [count, setCount] = useState(0);
useEffect(() => {
document.title = `You clicked ${count} times`;
return () => {
document.title = 'React App';
};
}, [count]);
return (
<div>
<p>You clicked {count} times</p>
<button onClick={() => setCount(count + 1)}>
Click me
</button>
</div>
);
}
```
## **useContext**
The useContext hook provides a convenient way of consuming data from a React context without having to use the traditional `Context.Consumer` component. It accepts a context object as an argument and returns the current context value for that context. This allows us to access context values in any component within the context provider, making it easier to manage the global state.
###**Syntax**
```
const value = useContext(MyContext);
```
###**Example**
```
import React, { useContext } from 'react';
const ThemeContext = React.createContext('light');
function ThemedComponent() {
const theme = useContext(ThemeContext);
return <div style={{ background: theme === 'light' ? '#fff' : '#333', color: theme === 'light' ? '#000' : '#fff' }}>
Theme is {theme}
</div>;
}
function App() {
return (
<ThemeContext.Provider value="dark">
<ThemedComponent />
</ThemeContext.Provider>
);
}
```
##**useRef**
The useRef hook allows us to access DOM elements or values that persist between renders in functional components. It returns a mutable ref object that we can use to store any value that persists throughout the component's lifespan. This is useful when we need to access the DOM element directly or store a value that needs to be accessed in different renders.
###**Syntax**
```
const refContainer = useRef(initialValue);
```
###**Example**
```
import React, { useRef, useEffect } from 'react';
function FocusInput() {
const inputRef = useRef(null);
useEffect(() => {
inputRef.current.focus();
}, []);
return <input ref={inputRef} type="text" />;
}
```
##**useMemo**
The useMemo hook allows us to optimize our application's performance by memoizing a function's result. The function is only re-executed if its dependencies change, which is most useful for expensive calculations or when the same computed value is used multiple times in a component. By using useMemo, we can avoid unnecessary recalculation and improve our application's overall performance.
###**Syntax**
```
const memoizedValue = useMemo(() => computeExpensiveValue(a, b), [a, b]);
```
###**Example**
```
import React, { useMemo } from 'react';
function ExpensiveComponent({ items }) {
const sortedItems = useMemo(() => {
    return [...items].sort((a, b) => a - b); // copy first: sort() mutates the array in place
}, [items]);
return (
<div>
{sortedItems.map(item => <div key={item}>{item}</div>)}
</div>
);
}
```
##**useCallback**
The useCallback hook returns a memoized callback function that only changes if one of its dependencies has changed. This is useful for preventing unnecessary re-renders.
The useCallback hook is similar to useMemo, but instead of returning a memoized value, it returns a memoized callback function. This is useful when we need to pass a callback to a child component that relies on reference equality to avoid re-rendering.
###Syntax
```
const memoizedCallback = useCallback(() => {
doSomething(a, b);
}, [a, b]);
```
###**Example**
```
import React, { useState, useCallback } from 'react';
function Button({ onClick }) {
console.log('Button re-rendered');
return <button onClick={onClick}>Click me</button>;
}
function ParentComponent() {
const [count, setCount] = useState(0);
const increment = useCallback(() => {
setCount(c => c + 1);
}, []);
return (
<div>
<Button onClick={increment} />
<p>{count}</p>
</div>
);
}
```
##**Conclusion**
React Hooks such as useState, useEffect, useContext, useRef, useMemo, and useCallback provide powerful capabilities for managing state, side effects, context, and performance optimizations in functional components. Understanding and utilizing these hooks effectively allows you to create more efficient, readable, and maintainable React applications.
These are the most commonly used React Hooks, but there are more, such as useReducer for managing complex state and custom hooks for reusing stateful logic. React Hooks provide a more intuitive and concise way of writing React code, improving code readability and maintainability. They also eliminate the need for higher-order components and render props, making code easier for beginners to understand. Our [reactjs developers](https://www.elightwalk.com/hire-us/hire-reactjs-developer) use hooks extensively in our React development services. So, next time you start a new React project, consider using Hooks to improve your project's flexibility, or hire an expert [React development](https://www.elightwalk.com/services/reactjs-development) team from Elightwalk Technology.
| elightwalk |
1,868,947 | Srivinayakarmc Building construction materials supplier | Introduction In the dynamic and demanding world of construction, the quality of materials can make or... | 0 | 2024-05-29T12:13:43 | https://dev.to/sri_vinayaka_6dfa8d99d718/srivinayakarmc-building-construction-materials-supplier-461f | Introduction
In the dynamic and demanding world of construction, the quality of materials can make or break a project. [Srivinayakarmc](https://www.srivinayakarmc.com/index.html), a leading supplier of building construction materials, stands as a pillar of reliability and excellence. With a comprehensive range of products and an unwavering commitment to quality, Srivinayakarmc has established itself as a trusted partner for builders, contractors, and developers.
Comprehensive Product Range
Srivinayakarmc offers an extensive selection of building materials that cater to every phase of construction. From the foundation to the finishing touches, their product portfolio includes:
1. Cement and Concrete: Srivinayakarmc supplies top-grade cement and concrete mixes that ensure strong, durable, and resilient structures. Their products are sourced from reputable manufacturers known for their adherence to quality standards.
2. Steel and Reinforcement: The company provides high-quality steel bars, rods, and mesh that are essential for reinforcing concrete and adding structural integrity to buildings. These materials are tested rigorously to meet industry specifications.
3. Bricks and Blocks: Offering a variety of bricks and blocks, including fly ash bricks, hollow blocks, and solid blocks, [Srivinayakarmc](https://www.srivinayakarmc.com/index.html) ensures that builders have the best options for constructing walls and partitions that are both robust and energy-efficient.
4. Aggregates: Essential for concrete production and other construction activities, the aggregates supplied by Srivinayakarmc are of superior quality. Their range includes gravel, crushed stone, and sand, all of which are carefully selected and processed.
5. Plumbing and Electrical Supplies: Srivinayakarmc also provides a wide array of plumbing and electrical materials, ensuring that every aspect of the construction process is covered. From pipes and fittings to wires and switches, their inventory is both comprehensive and reliable.
6. Tiles and Flooring: The company offers an extensive range of tiles, including ceramic, porcelain, and vitrified options, along with other flooring materials. These products are designed to meet aesthetic and functional requirements, ensuring beautiful and long-lasting finishes.
7. Paints and Coatings: For the finishing touch, Srivinayakarmc supplies high-quality paints, primers, and coatings that protect and enhance the appearance of buildings. Their range includes products from leading brands known for their durability and aesthetic appeal.
Commitment to Quality
[Srivinayakarmc’s](https://www.srivinayakarmc.com/index.html) dedication to quality is evident in every aspect of their operations. They source materials from renowned manufacturers who adhere to stringent quality control measures. Additionally, Srivinayakarmc conducts regular inspections and quality checks to ensure that all products meet industry standards and exceed customer expectations.

| sri_vinayaka_6dfa8d99d718 | |
1,868,946 | What do you think of this website, which is made in PHP? | We are seeking recommendations or reviews from experts regarding this website rto codes , which is... | 0 | 2024-05-29T12:10:08 | https://dev.to/saqibdev/what-do-you-think-of-this-website-which-is-made-in-php-19fn | wordpress, php, design, javascript | We are seeking recommendations or reviews from experts regarding this website [rto codes](https://infohindime.online/) , which is developed using PHP. | saqibdev |
1,868,945 | Test | test 1 2 3 | 0 | 2024-05-29T12:08:24 | https://dev.to/vdevk/test-1562 | test 1 2 3 | vdevk | |
1,868,944 | Remote Peering: Addressing the Challenges of Modern Networking | The rapid evolution of global cloud services, digital communication, and the surge in internet users... | 0 | 2024-05-29T12:07:37 | https://dev.to/noor_butt_tech/remote-peering-addressing-thechallenges-of-modern-networking-12ep | The rapid evolution of global cloud services, digital communication, and the
surge in internet users has significantly increased the demand for efficient
internet traffic exchange. With over [5.18 billion internet users worldwide as of April 2023](https://datareportal.com/reports/digital-2023-april-global-statshot) and a predicted monthly internet traffic of 150.7 exabytes,
largely driven by video content consumption, the need for effective peering
solutions is more critical than ever. This article delves into the concept of
remote peering, its benefits, and the challenges it addresses in modern
networking.
**Understanding Peering and Remote Peering**
**What is Peering?**
Peering is the direct exchange of internet traffic between networks at an
Internet Exchange Point (IXP). This practice enhances internet
performance by reducing reliance on third-party transit providers and
enabling faster, more reliable internet experiences for end-users through
efficient data exchange with fewer hops. Traditional peering requires a
physical Point of Presence (PoP) at the IXP, involving hardware installation,
connection fees, and management of multiple supplier relationships.
**The Emergence of Remote Peering**
Remote peering offers a revolutionary alternative to traditional direct
peering. It allows organizations to connect to an IXP without a physical
presence at the exchange point. Through a service provider like Epsilon,
which has pre-existing connections to peering platforms, businesses can
access IXPs remotely. This method simplifies the peering process, reduces
costs, and broadens access to multiple IXPs globally.
**The Shift from Direct Peering to Remote Peering**
**Benefits of Remote Peering**
Cost Efficiency
One of the most significant advantages of remote peering is cost
efficiency. Traditional direct peering entails substantial expenses, including
hardware installation, colocation fees, and ongoing maintenance costs.
Remote peering eliminates these costs, allowing businesses to access
multiple IXPs through a single interconnection port. This setup requires
only one cross-connect to the service port, significantly reducing overall
expenses. For example, consolidating direct peering services with five IXs
into Epsilon's [remote peering service](https://epsilontel.com/solutions/remote-peering/) can achieve up to 40% savings.
Simplified Management
Managing multiple IXPs can be complex and time-consuming. Each IXP
has different Service Level Agreements (SLAs) and membership
requirements. Remote peering simplifies this process by providing a single
contract and end-to-end SLAs through the remote peering provider. This
approach streamlines vendor management, reducing administrative
burdens and enhancing operational efficiency.
Greater Reach and Flexibility
Remote peering enables businesses to connect to a broader range of
networks at different IXPs, enhancing global reach and network
performance. Providers like Epsilon connect to over 16 leading IXs
worldwide, offering access to 120+ internet exchange on-ramps and a
peering community of over 8,000 members. This extensive network allows
businesses to establish peering connections at strategic locations,
reducing latency and improving data transmission efficiency.
**Enhancing Network Performance and Security**
Improved Network Performance
Remote peering significantly improves network performance by facilitating
easy connections with a diverse range of networks at various IXPs. By
reducing the distance data packets must travel between peering partners,
remote peering minimizes latency and enhances overall network speed.
Additionally, network administrators gain greater control over traffic
routing, allowing them to bypass intermediate networks and congestion
points.
Increased Security and Resiliency
The dedicated Layer 2 connectivity provided by remote peering ensures
high-speed, secure, and reliable connections between businesses and
internet exchanges. This setup enhances network resiliency by providing
alternative pathways for data transmission, reducing the risk of service
disruptions. Providers like Epsilon offer industry-leading SLAs, ensuring
that businesses receive dependable and secure peering services.
**Operational Simplicity and Reduced Costs**
Simplified Onboarding and Management
Remote peering offers operational simplicity by automating the process of
joining an IXP and establishing peering policies. Automated systems
handle the technical aspects of peering, freeing businesses from the need
to manage individual connections with numerous networks. Epsilon's
Network-as-a-Service (NaaS) platform, Infiny, further simplifies this process
by enabling self-provisioning, scaling, and monitoring of peering and
connectivity services through a user-friendly portal.
Significant Cost Savings
Remote peering provides substantial cost savings compared to direct
peering. Businesses can avoid physical interconnection fees, colocation
expenses, and hardware deployment costs. Additionally, remote peering
reduces ongoing operational costs by consolidating peering services into a
single, efficient interconnection port. For instance, consolidating direct
peering with five IXs into Epsilon's remote peering service can result in a
total cost reduction of up to 14% for cross-continental connections
between Asia and Europe.
**Case Studies: Real-World Applications of Remote Peering**
Scenario 1: European Consolidation
A customer consolidated its direct peering services with five IXs within
Europe into Epsilon's remote peering service using a 100G interconnection
port. This consolidation reduced the number of required physical ports and
colocation sites from five to one, resulting in significant cost savings. The
total annual cost for direct peering was $171,420, while the remote peering
solution cost $103,476, achieving a 40% reduction in expenses.
Scenario 2: Asia-Europe Integration
Another customer integrated its direct peering services with two local IXs
in Asia and three IXs in Europe into Epsilon's remote peering service. This
setup included a Data Centre Interconnect (DCI) between Asia and Europe,
utilizing a 100G interconnection port. The total cost for direct peering was
$171,420, while the remote peering solution, including the DCI, cost
$148,176, resulting in a 14% cost saving.
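The quoted percentages follow directly from the dollar figures above; a few lines of Python make the arithmetic explicit:

```python
# Sanity-check the savings percentages quoted in the two scenarios above.
def savings_pct(direct_cost, remote_cost):
    """Percentage saved by moving from direct peering to remote peering."""
    return (direct_cost - remote_cost) / direct_cost * 100

# Scenario 1: five European IXs consolidated into one remote peering port.
print(round(savings_pct(171_420, 103_476)))  # -> 40

# Scenario 2: Asia-Europe consolidation including the DCI.
print(round(savings_pct(171_420, 148_176)))  # -> 14
```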
**Conclusion**
Remote peering is transforming the landscape of internet traffic exchange
by offering a cost-effective, efficient, and flexible alternative to traditional
direct peering. By eliminating the need for physical presence at IXPs,
remote peering enables businesses to expand their global reach, improve
network performance, enhance security, and simplify operational
management. Providers like Epsilon are at the forefront of this
transformation, offering robust infrastructure, extensive IXP connections,
and user-friendly platforms that empower businesses to optimize their
peering strategies.
As the demand for efficient internet traffic exchange continues to grow,
remote peering will play an increasingly vital role in supporting the
connectivity needs of modern organizations. By leveraging the benefits of
remote peering, businesses can unlock new opportunities for global
connectivity, ensuring that they remain competitive in the ever-evolving
digital landscape.
| noor_butt_tech | |
1,868,943 | 🤷♀️Mastering JavaScript Console Methods: Boost Your Debugging Skills!🚀 | 1. Logging to the Console The most basic and frequently used method is console.log(). It... | 0 | 2024-05-29T12:05:53 | https://dev.to/dharamgfx/mastering-javascript-console-methods-boost-your-debugging-skills-3eda | javascript, webdev, beginners, programming | ### 1. Logging to the Console
The most basic and frequently used method is `console.log()`. It prints messages to the console, making it easy to debug and inspect values.
**Example:**
```javascript
console.log('Hello, World!');
const x = 42;
console.log('The value of x is:', x);
```
### 2. Logging Levels with `info`, `warn`, and `error`
Different logging levels help categorize the importance and type of messages:
- **`console.info()`**: Informational messages.
- **`console.warn()`**: Warnings that don't stop the execution.
- **`console.error()`**: Errors that usually indicate a problem.
**Example:**
```javascript
console.info('This is an informational message.');
console.warn('This is a warning message.');
console.error('This is an error message.');
```
### 3. Displaying Tables using `console.table()`
The `console.table()` method displays data in a table format, making it easier to read arrays and objects.
**Example:**
```javascript
const users = [
{ name: 'Alice', age: 25 },
{ name: 'Bob', age: 30 },
];
console.table(users);
```
### 4. Counting using `console.count()`
`console.count()` counts the number of times it's called with the same label, which is useful for tracking the frequency of a specific action or event.
**Example:**
```javascript
console.count('button clicked'); // button clicked: 1
console.count('button clicked'); // button clicked: 2
```
### 5. Adding Timers using `console.time()` and `console.timeEnd()`
Timers help measure the time taken by a block of code. Start the timer with `console.time(label)` and end it with `console.timeEnd(label)`.
**Example:**
```javascript
console.time('myTimer');
// Code to measure
console.timeEnd('myTimer');
```
### 6. Grouping Logs using `console.group()`
`console.group()` and `console.groupEnd()` create collapsible groups in the console, organizing related messages together.
**Example:**
```javascript
console.group('User Details');
console.log('Name: Alice');
console.log('Age: 25');
console.groupEnd();
```
### 7. Creating Traces using `console.trace()`
`console.trace()` outputs a stack trace, showing the path to where the trace was called. It’s useful for debugging and understanding the flow of your code.
**Example:**
```javascript
function a() {
b();
}
function b() {
c();
}
function c() {
console.trace('Trace in function c');
}
a();
```
### 8. Cleaning Up using `console.clear()`
`console.clear()` clears the console, providing a clean slate.
**Example:**
```javascript
console.log('This will be cleared');
console.clear();
```
### Summary
The JavaScript console provides powerful methods for logging, debugging, and organizing your output. By using these methods—`log`, `info`, `warn`, `error`, `table`, `count`, `time`, `timeEnd`, `group`, `groupEnd`, `trace`, and `clear`—you can enhance your development workflow and efficiently debug your code. Happy coding! | dharamgfx |
1,864,194 | Build A Dual-Purpose App: Text-to-Image and Custom Chatbot Using Comet, GPT-3.5, DALL-E 2, and Streamlit | Overview In this guide, we will explore how to create a dual-purpose application: a chatbot powered... | 0 | 2024-05-29T12:05:46 | https://dev.to/hitsubscribe/build-a-dual-purpose-app-text-to-image-and-custom-chatbot-using-comet-gpt-35-dall-e-2-and-streamlit-14o5 | python, webdev, ai | <h2 class="graf graf--h3" style="padding-left: 40px;">Overview</h2>
<p class="graf graf--p">In this guide, we will explore how to create a dual-purpose application: a chatbot powered by a custom dataset and a text-to-image generator, using OpenAI’s GPT-3.5 Turbo and DALL-E 2 models, along with Comet and Streamlit.</p>
<img class="size-full wp-image-96752 aligncenter" src="https://www.hitsubscribe.com/wp-content/uploads/2024/04/All-technologies.png" alt="" width="667" height="607" />
<p class="graf graf--p">Now let’s take a brief look at the infrastructures we will be using.</p>
<h2 class="graf graf--h3">Comet</h2>
<p class="graf graf--p"><a class="markup--anchor markup--p-anchor" href="https://www.comet.com/" target="_blank" rel="noopener" data-href="https://www.comet.com/">Comet</a> is a platform that offers real-time experiment tracking with additional collaboration features. With Comet you can log your object detection models (YOLO, Tensorflow), large language models, regression and classification models and the like, with their various parameters. It also gives you the capability to monitor the training and prompting of all of these models and provides you with the option to share your logged projects publicly or privately with your team.</p>
<p class="graf graf--p">One advantage Comet has over similar platforms is its ability to easily integrate with your existing infrastructure and tools so you can manage, visualize, and optimize models from training runs to production monitoring. <img class="wp-image-96753 size-full aligncenter" src="https://www.hitsubscribe.com/wp-content/uploads/2024/04/COMET-HOMEPAGE-e1713469576872.png" alt="" width="900" height="421" /></p>
<h2 class="graf graf--h3">GPT-3.5 Turbo Model</h2>
<p class="graf graf--p">According to OpenAI, the GPT-3.5 Turbo is a model that improves on GPT-3.5 and can understand as well as generate natural language or code. With the help of user feedback, OpenAI has improved the GPT-3.5 Turbo language model, making it more proficient at understanding and following instructions. Being a fine-tuned model by OpenAI, it has been given examples of inputs and expected outputs to train (fine-tune) it for a particular task. OpenAI created GPT-3.5 Turbo as an expansion of their well-liked GPT-3 model. The GPT-3.5-Turbo-Instruct is available in three model sizes: 1.3B, 6B, and 175B parameters.</p>
<h2 class="graf graf--h3">DALL-E 2</h2>
<p class="graf graf--p">DALL·E 2 is an AI system that can create realistic images and art from a description in natural language. Below is an image generated by this app by running a prompt “<strong class="markup--strong markup--p-strong">A cup pouring fire as a portal to another dimension.</strong>”</p>
<img class="aligncenter wp-image-96754 size-full" src="https://www.hitsubscribe.com/wp-content/uploads/2024/04/cup-to-another-dimension-e1713469651769.png" alt="" width="850" height="850" />
<h2 class="graf graf--h3">Streamlit</h2>
<p class="graf graf--p"><a class="markup--anchor markup--p-anchor" href="https://streamlit.io/" target="_blank" rel="noopener" data-href="https://streamlit.io/">Streamlit</a> is a platform that enables you to build web applications that can be hosted in the cloud in just minutes. It helps you build interactive dashboards, generate reports, or create chat applications. Once you’ve created an app, you can use the Community Cloud platform to deploy, manage, and share your application.</p>
<p class="graf graf--p">This <a class="markup--anchor markup--p-anchor" href="https://onyejiakutheophilus-dataprofessionals-ap-prediction-page-jdvug7.streamlit.app/" target="_blank" rel="noopener" data-href="https://onyejiakutheophilus-dataprofessionals-ap-prediction-page-jdvug7.streamlit.app/">application</a> is an example of deploying with Streamlit.</p>
<h2>Prerequisites</h2>
In this section, we will quickly take a look at some of the tools you will need to successfully follow along with these steps and ultimately build your own application.
<ul>
<li><strong>Python</strong>: A high-level programming language for many use cases. For this project, we will be using <strong>Python version 3.9</strong>, though it should also work with other recent versions of Python. Proceed <a href="https://www.python.org/downloads/">here</a> to download Python for your operating system. Ensure you add Python to your PC environment variables by following this <a href="https://phoenixnap.com/kb/add-python-to-path">guide</a>.</li>
<li><strong>pip: </strong>A package installer used in python. It is very important to have pip running in your PC for you to be able to flow along with this project. See this <a href="https://phoenixnap.com/kb/install-pip-windows">guide</a> on how to install pip and add it to your PC path.</li>
<li><strong>Pycharm IDE</strong>: <a href="https://www.jetbrains.com/pycharm/">Pycharm</a> is the integrated development environment we will be using to build the application. It is simply where we will be writing our code. It is easy to install and saves you a lot of coding time, by assisting with code completion, code navigation, code refactoring and debugging. The community edition of this software is free! Once you create and give a name to any new project, it provides you with a Python virtual environment (venv) that enables the installation of libraries specifically for that project as opposed to sharing them with all users of the computer.</li>
<li><strong>Dataset</strong>: The dataset we will be using in this project for training the LLM can be found <a href="https://github.com/prust/wikipedia-movie-data/blob/master/movies-2020s.json">here</a>. Taking a closer look at the dataset structure, as seen in the figure below for the first two movies from the dataset, we will need only the movie's "<strong>title</strong>", "<strong>year</strong>", "<strong>genre</strong>" and the "<strong>extract</strong>". This structure of the dataset is very important to take into consideration; when we get to the coding part of this project, we will look into that.</li>
</ul>
<h3 class="graf graf--h3"><img class="aligncenter wp-image-96807 size-full" src="https://www.hitsubscribe.com/wp-content/uploads/2024/04/dataset-image-e1713470396898.png" alt="" width="950" height="457" /></h3>
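<p class="graf graf--p">As a quick illustration of that structure, the sketch below pulls out the four fields we care about from entries shaped like the dataset. The two sample records are abbreviated, illustrative stand-ins rather than rows copied from the file; the full app downloads the real JSON from the GitHub URL above.</p>

```python
import json

# Abbreviated sample records mimicking the structure of movies-2020s.json.
sample = json.loads("""
[
  {"title": "The Grudge", "year": 2020,
   "genres": ["Horror", "Supernatural"],
   "extract": "The Grudge is a 2020 American psychological horror film."},
  {"title": "Underwater", "year": 2020,
   "genres": ["Action", "Horror"],
   "extract": "Underwater is a 2020 American science fiction film."}
]
""")

# Keep only the fields the chatbot is trained on: title, year, genres, extract.
movie_info = ""
for entry in sample:
    title = entry.get("title", "")
    year = entry.get("year", "")
    genres = ", ".join(entry.get("genres", []))
    extract = entry.get("extract", "No extract available")
    movie_info += f"- {title} ({year}) - Genres: {genres}\n  Extract: {extract}\n"

print(movie_info)
```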
<h3 class="graf graf--h3">Now Let’s Get Started!</h3>
<p class="graf graf--p">To achieve our objective, we will be following just 5 simple steps.</p>
<h2 class="graf graf--h3">Step 1: Create a Comet account to log your LLM</h2>
<p class="graf graf--p">Now, if you haven't already, go to <a class="markup--anchor markup--p-anchor" href="https://www.comet.com/" target="_blank" rel="noopener" data-href="https://www.comet.com/">Comet</a> and create a new account. After successfully creating your account, head to the <a class="markup--anchor markup--p-anchor" href="https://www.comet.com/account-settings/apiKeys" target="_blank" rel="noopener" data-href="https://www.comet.com/account-settings/apiKeys">API key section</a> to get a copy of your Comet API key.</p>
<img class="aligncenter wp-image-96755 size-full" src="https://www.hitsubscribe.com/wp-content/uploads/2024/04/Comet-Api-key-e1713470412659.png" alt="" width="950" height="441" />
<p class="graf graf--p">Here, you can generate an API key for all your projects. You will need the API key for this part of the project. Copy and save it somewhere.</p>
<h2 class="graf graf--h3">Step 2: Create an OpenAI account to access OpenAI API</h2>
<p class="graf graf--p">If you are new to OpenAI, create your OpenAI account <a class="markup--anchor markup--p-anchor" href="https://chat.openai.com/" target="_blank" rel="noopener" data-href="https://chat.openai.com/">here</a>. Once you’ve successfully created an account, go on to <a class="markup--anchor markup--p-anchor" href="https://platform.openai.com/api-keys" target="_blank" rel="noopener" data-href="https://platform.openai.com/api-keys">API key section</a> by using the same credentials you used when creating your account. On the left panel of the screen, click on “<strong class="markup--strong markup--p-strong">API Keys</strong>” and then proceed to click on “<strong class="markup--strong markup--p-strong">Create new secret key</strong>”. This is shown below:</p>
<img class="aligncenter wp-image-96756 size-full" src="https://www.hitsubscribe.com/wp-content/uploads/2024/04/openai-apikeys-e1713469875863.png" alt="" width="950" height="479" />
<p class="graf graf--p">Next, you get a pop-up, as shown below, asking you to give a name for your secret key. Proceed to give it any name and click the “<strong>Create secret key</strong>” option.</p>
<img class="aligncenter wp-image-96757 size-full" src="https://www.hitsubscribe.com/wp-content/uploads/2024/04/input-a-test-key-e1713469912610.png" alt="" width="950" height="477" />
<p class="graf graf--p">Once done, you get a prompt to save your key. Ensure you copy and save your API key somewhere, as you might lose it if you do not copy it immediately.</p>
<img class="size-full wp-image-96758 aligncenter" src="https://www.hitsubscribe.com/wp-content/uploads/2024/04/api-key.png" alt="" width="1210" height="685" />
<blockquote class="graf graf--blockquote">Note: This API key is very important to save immediately. Once you close this pop up, you will not be able to get the key again and you will then need to create a new key from scratch. It is therefore necessary you copy it and save it somewhere on your PC. An MS word will do just fine.</blockquote>
<h3 class="graf graf--h3">Building the Application</h3>
<p class="graf graf--p">Now it's time to build the application. You can use any IDE of your choice (I used Pycharm). We will need the following libraries for the successful development of this application:</p>
<ul class="postList">
<li class="graf graf--li"><strong class="markup--strong markup--li-strong">comet-llm</strong>: This is a tool that will be used to log and visualize our LLM prompts.</li>
<li class="graf graf--li"><strong class="markup--strong markup--li-strong">openai</strong>: The official Python client library we will use to access the GPT-3.5 Turbo and DALL-E 2 APIs.</li>
<li class="graf graf--li"><strong class="markup--strong markup--li-strong">Streamlit</strong>: An open-source framework used for building data science and machine learning applications.</li>
<li><strong>json</strong>: Python module for encoding and decoding JSON data.</li>
<li><strong>urllib.request</strong>: Python module for making HTTP requests and working with URLs.</li>
</ul>
<h2 class="graf graf--h3">Step 3: Install all Dependencies</h2>
You first create a new project in your Pycharm IDE and give it any name. This way, you automatically have an environment to start coding with your Python interpreter and other packages.
<p class="graf graf--p">Now, in your IDE terminal, run the following commands to install all the dependencies:</p>
<pre class="lang:python decode:true ">pip install openai streamlit comet_llm</pre>
<p class="graf graf--p">Once done successfully, you will need to configure your API key from OpenAI.</p>
<h2 class="graf graf--h3">Step 4: Configure your OpenAI API key</h2>
<p class="graf graf--p">Inside your IDE directory, create a new folder called “.<strong class="markup--strong markup--p-strong">streamlit”</strong> and create a new file, “<strong class="markup--strong markup--p-strong">secrets.toml”</strong> file inside it. It will look like this snippet shown below:</p>
<img class="size-full wp-image-96759 aligncenter" src="https://www.hitsubscribe.com/wp-content/uploads/2024/04/SECRET-FILE.png" alt="" width="700" height="133" />
<p class="graf graf--p">Now open the “<strong class="markup--strong markup--p-strong">secrets.toml”</strong> file and add the following line:</p>
<pre class="lang:default decode:true ">MY_KEY = "Paste your OpenAI API key here."</pre>
<p class="graf graf--p">Make sure to replace <strong class="markup--strong markup--p-strong">"Paste your OpenAI API key here."</strong> with the actual OpenAI API key you saved earlier. After adding this line, save the file.</p>
<h2 class="graf graf--h3">Step 5: Write your Code</h2>
<p class="graf graf--p">Now create a new python script and give it any name. For this, I named mine “<strong class="markup--strong markup--p-strong">dualapp”</strong>. Below is the code to build this dual-purpose app with inline explanation for each line of code.</p>
<pre class="lang:python decode:true ">import streamlit as st
from openai import OpenAI
import comet_llm
import json
from urllib.request import urlopen

# Initialize OpenAI client
client = OpenAI(api_key=st.secrets["MY_KEY"])

# Load the JSON content of movies from the provided URL
response = urlopen("https://raw.githubusercontent.com/prust/wikipedia-movie-data/master/movies-2020s.json")

# Limit to the first 100 items
train_data = json.loads(response.read())[:100]

# Extract relevant information for training
movie_info = """
Hi! I am a chatbot designed to assist you.
Here are some movies you might find interesting:
"""
for entry in train_data:
    title = entry.get('title', '')
    year = entry.get('year', '')
    genres = ", ".join(entry.get('genres', []))
    extract = entry.get('extract', 'No extract available')
    movie_info += f"- {title} ({year}) - Genres: {genres}\n"
    movie_info += f"  Extract: {extract}\n"

# Instruction for the model
instruction = """
You are strictly going to answer questions based on the movies provided to you. Do not discuss any other information that
has nothing to do with the movies provided to you.
I want you to take note of the year, title, genre, and extract of the movies and be able to answer questions on them.
"""

# Combine movie_info and instruction for the system message
system_message = instruction + "\n\n" + movie_info

selection = st.sidebar.selectbox("Chat Bot to Text to Image", ("Custom Chat Bot", "Text to Image"))

if selection == "Custom Chat Bot":
    # Initialize Streamlit UI
    st.title("This is a chatbot about Theo")

    # Initialize chat history
    if "messages" not in st.session_state:
        st.session_state.messages = []

    # Display chat history
    for message in st.session_state.messages:
        if message["role"] == "user":
            st.markdown(f"**You:** {message['content']}")
        elif message["role"] == "assistant":
            st.markdown(f"**💼:** {message['content']}")

    # User input for new chat
    prompt = st.text_input("📝", key="user_input_" + str(len(st.session_state.messages)))
    if prompt:
        st.session_state.messages.append({"role": "user", "content": prompt})

        # Formulate messages for the OpenAI API
        messages = [{"role": "system", "content": system_message}]
        for message in st.session_state.messages:
            messages.append({"role": message["role"], "content": message["content"]})

        # Stream the completion and accumulate the chunks
        full_response = ""
        for chunk in client.chat.completions.create(
            messages=messages,
            model="gpt-3.5-turbo",
            stream=True,
        ):
            full_response += (chunk.choices[0].delta.content or "")

        st.session_state.messages.append({"role": "assistant", "content": full_response})
        st.markdown(f"**💼:** {full_response}")

        # Display user input field for the next chat
        st.text_input("📝", key="user_input_" + str(len(st.session_state.messages)))

        # Log the LLM prompt on Comet
        comet_llm.log_prompt(
            api_key="9HibPMbc18shhthis_is_my_api_key",
            prompt=prompt,
            output=full_response,
            metadata={
                "model": "gpt-3.5-turbo"
            }
        )
else:
    # Streamlit UI for Text to Image
    st.title("DALL-E-2 Text-to-Image Generation")

    # User input for text prompt
    text_prompt = st.text_input("Enter a text prompt")
    if text_prompt:
        # Use the OpenAI API to generate an image from the text prompt
        response = client.images.generate(
            model="dall-e-2",
            prompt=text_prompt,
            size="1024x1024",
            quality="standard",
            n=1,
        )

        # Get the generated image URL from the OpenAI response
        image_url = response.data[0].url

        # Display the generated image
        st.image(image_url, caption="Generated Image", use_column_width=True)
</pre>
<p class="graf graf--p">Key take-aways from the code above:</p>
<ul class="postList">
<li class="graf graf--li">Initialize OpenAI client using the API key you copied from your OpenAI account.</li>
<li class="graf graf--li">The <code class="markup--code markup--li-code">system_message</code> variable lets us instruct the model and supply the context it should answer from.</li>
<li class="graf graf--li">Initialize the chat history.</li>
<li class="graf graf--li">We display the chat history.</li>
<li class="graf graf--li">We also provide a new chat for user input right away.</li>
<li class="graf graf--li">We formulate the message for OpenAI, then iteratively generate completions from a chat client using a GPT-3.5 Turbo model based on the provided messages.</li>
<li class="graf graf--li">We log the LLM prompt on Comet using the API key from Comet.</li>
</ul>
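<p class="graf graf--p">One detail from the streaming loop is worth isolating: each streamed chunk's <code class="markup--code markup--p-code">delta.content</code> can be <code class="markup--code markup--p-code">None</code>, which is why the code appends <code class="markup--code markup--p-code">(content or "")</code>. Below is a minimal, self-contained sketch of that accumulation logic, using stand-in chunk objects instead of a live OpenAI stream (the stand-in objects are for illustration only):</p>

```python
from types import SimpleNamespace

def accumulate_stream(chunks):
    """Concatenate streamed delta contents, treating None as empty."""
    full_response = ""
    for chunk in chunks:
        full_response += (chunk.choices[0].delta.content or "")
    return full_response

# Stand-in chunks shaped like OpenAI streaming events
def make_chunk(text):
    return SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])

chunks = [make_chunk("Hel"), make_chunk("lo"), make_chunk(None), make_chunk("!")]
print(accumulate_stream(chunks))  # Hello!
```

<p class="graf graf--p">Treating <code class="markup--code markup--p-code">None</code> as an empty string matters because the stream's first and last events typically carry no text.</p>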
<h2 class="graf graf--h3">Run your App!</h2>
<p class="graf graf--p">Run the command below to start your app. As mentioned before, I named this app “<strong class="markup--strong markup--p-strong">dualapp</strong>”.</p>
<pre class="lang:python decode:true ">streamlit run dualapp.py</pre>
<p class="graf graf--p">Bravo! You’ll get the response shown below:</p>
<img class="size-full wp-image-96761 aligncenter" src="https://www.hitsubscribe.com/wp-content/uploads/2024/04/APP-RAN-SUCCESSFULLY.png" alt="" width="791" height="178" />
<p class="graf graf--p">Click on the link in the output message to view your app.</p>
<p class="graf graf--p">This is the home page of the app<img class="alignnone wp-image-96781 size-full" src="https://www.hitsubscribe.com/wp-content/uploads/2024/04/home-page-1-e1713470271585.png" alt="" width="950" height="418" /></p>
<p class="graf graf--p">Below is a prompt using the chatbot:</p>
<p class="graf graf--p"><img class="aligncenter wp-image-96782 size-full" src="https://www.hitsubscribe.com/wp-content/uploads/2024/04/Prompt-in-app-and-a-corresponding-log-on-comet-e1713470299390.png" alt="" width="950" height="418" />Below is the corresponding LLM log on comet. Visit <a class="markup--anchor markup--p-anchor" href="https://www.comet.com/theophilus/llm-general/prompts" target="_blank" rel="noopener" data-href="https://www.comet.com/theophilus/llm-general/prompts"><strong class="markup--strong markup--p-strong">here</strong></a> to view this page. Make sure to click on “Columns” in order to select the variables of the table you want to see as shown in the figure below:</p>
<p class="graf graf--p"><img class="aligncenter wp-image-96779 size-full" src="https://www.hitsubscribe.com/wp-content/uploads/2024/04/Log-on-Comet-1-e1713470314669.png" alt="" width="950" height="373" />Now lets explore the app:</p>
[video width="1280" height="644" mp4="https://www.hitsubscribe.com/wp-content/uploads/2024/04/YouCut_20240409_113404658.mp4"][/video]
<h3 class="graf graf--h3">Summary</h3>
<p class="graf graf--p">To create this dual-purpose app integrating both a text-to-image generator and a custom chatbot, we followed these steps:</p>
<ul class="postList">
<li class="graf graf--li"><strong class="markup--strong markup--li-strong">Step 1</strong>: Create a Comet account to log your LLM.</li>
<li class="graf graf--li"><strong class="markup--strong markup--li-strong">Step 2</strong>: Create an OpenAI account to access your OpenAI API keys.</li>
<li class="graf graf--li"><strong class="markup--strong markup--li-strong">Step 3</strong>: Install all dependencies.</li>
<li class="graf graf--li"><strong class="markup--strong markup--li-strong">Step 4</strong>: Configure your OpenAI API key.</li>
<li class="graf graf--li"><strong class="markup--strong markup--li-strong">Step 5</strong>: Write your code.</li>
</ul>
<p class="graf graf--p">Thank you for your time!</p>
<strong>Credit: Dataset from <a href="https://github.com/prust">Peter Rust</a> </strong> | theoonyejiaku |
1,868,942 | Google’s AI Research in Healthcare: Med-PaLM and Beyond | Introduction Artificial Intelligence (AI) has been making significant strides in various sectors,... | 27,548 | 2024-05-29T12:04:44 | https://dev.to/aishikl/googles-ai-research-in-healthcare-med-palm-and-beyond-2lbe | <h2>Introduction</h2>
<p>Artificial Intelligence (AI) has been making significant strides in various sectors, and healthcare is no exception. Google's AI research, particularly in the medical domain, has been groundbreaking. This blog delves into Google's advancements in healthcare AI, focusing on their medical large language model (LLM) research, including Med-PaLM and its successor Med-PaLM 2.</p>
<h2>Med-PaLM: A Revolutionary AI Model</h2>
<h3>The Genesis of Med-PaLM</h3>
<p>Med-PaLM is a large language model (LLM) designed to provide high-quality answers to medical questions. It harnesses the power of Google's large language models, which have been aligned to the medical domain and evaluated using medical exams, medical research, and consumer queries. <a href="https://sites.research.google/med-palm/">Learn more</a>.</p>
<h3>Med-PaLM 2: The Next Iteration</h3>
<p>Med-PaLM 2, introduced at Google Health’s annual event, The Check Up, in March 2023, was the first to reach human expert level on answering USMLE-style questions. According to physicians, the model's long-form answers to consumer medical questions improved substantially. <a href="https://cloud.google.com/blog/topics/healthcare-life-sciences/sharing-google-med-palm-2-medical-large-language-model">Read more</a>.</p>
<h2>AI in Ultrasound Image Interpretation</h2>
<h3>Bridging the Gap in Maternal Care</h3>
<p>In recent years, sensor technology has evolved to make ultrasound devices more affordable and portable. However, they often require experts with years of experience to conduct exams and interpret the images. To help bridge this divide, Google is building AI models that can help simplify acquiring and interpreting ultrasound images to identify important information like gestational age in expecting mothers and early detection of breast cancer. <a href="https://health.google/health-research/">Learn more</a>.</p>
<h3>Partnerships for Real-World Applications</h3>
<p>Google is partnering with Jacaranda Health, a Kenya-based nonprofit focused on improving health outcomes for mothers and babies in government hospitals, to research digital solutions that can help them reach their goal. In Sub-Saharan Africa, maternal mortality remains high, and there is a shortage of workers trained to operate traditional high-cost ultrasound machines. <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10301994/">Read more</a>.</p>
<h2>Enhancing Radiotherapy Planning with AI</h2>
<h3>Collaboration with Mayo Clinic</h3>
<p>Over the past three years, Google has partnered with Mayo Clinic to explore how AI can support the tedious, time-consuming process of planning for radiotherapy, a common cancer treatment used to treat more than half of cancers in the U.S. The most labor-intensive step in the planning process is a technique called “contouring”, where clinicians draw lines on CT scans to separate areas of cancer from nearby healthy tissues that can be damaged by radiation during treatment. <a href="https://arxiv.org/html/2401.05654v1">Learn more</a>.</p>
<h3>Future Research and Development</h3>
<p>Google will soon publish research about the findings of their study and the radiotherapy model they developed. As of today, they are formalizing their agreement with Mayo Clinic to explore further research, model development, and commercialization. <a href="https://ai.google/discover/healthai/">Read more</a>.</p>
<h2>AI for Tuberculosis Screening</h2>
<h3>Addressing Global Health Challenges</h3>
<p>Building on years of health AI research, Google is working with partners on the ground to bring the results of their research on tuberculosis (TB) AI-powered chest x-ray screening into the care setting. According to the WHO, TB is the ninth leading cause of death worldwide, with over 25% of TB deaths occurring in Africa. <a href="https://www.linkedin.com/posts/google-health_announcing-our-newest-health-ai-launch-activity-7140719681690099713-fW-E">Learn more</a>.</p>
<h3>Real-World Implementation</h3>
<p>While TB is treatable, it requires cost-effective screening solutions to help catch the disease early and reduce community spread. Google's AI models aim to provide these solutions, making a significant impact on global health. <a href="https://medium.com/meta-multiomics/pushing-the-limits-googles-med-palm-2-is-transforming-ai-in-healthcare-3ab1d0063866">Read more</a>.</p>
<h2>Conclusion</h2>
<p>Google's advancements in AI for healthcare, particularly through their Med-PaLM models, are paving the way for more accurate, efficient, and accessible medical care. From improving maternal care and cancer treatments to addressing global health challenges like tuberculosis, Google's AI research is making a significant impact on the healthcare industry. <a href="https://en.wikipedia.org/wiki/Artificial_intelligence_in_healthcare">Learn more</a>.</p> | aishikl | |
1,868,941 | Crypto Payment Gateway: Easy Steps for Success | In this era of digital payments, businesses are increasingly exploring the opportunities in crypto... | 0 | 2024-05-29T12:04:23 | https://dev.to/rahulsukhwal/crypto-payment-gateway-easy-steps-for-success-1h69 | cryptopaymentgatewaysoftware, cryptopaymentgatewayservice | In this era of digital payments, businesses are increasingly exploring the opportunities in crypto payment gateways. Blockchain is a blend of trust, security, and transparency. Beyond the hype, blockchain technology has grown and transformed a number of industries, such as supply chain, healthcare, and finance. It has also been incorporated into a wide range of payment systems.
Companies like Microsoft, Tesla, and Starbucks also accept payments in cryptocurrencies for their services. It is evident that crypto payment methods will have a huge influence in the future, so [crypto payment gateway development](https://www.jploft.com/cryptocurrency-payment-gateway-development) might now be one of the best ways to thrive in a competitive market.
## Step-by-Step Guide To Crypto Payment Gateway Development
### STEP 1 – Analyze the market
Doing basic research on current market trends before starting the project can be highly beneficial. Learn about the workings of these payment gateways and what features competitors offer. Conduct a survey and identify your target audience to learn which features are in high demand. This will help you create a solid plan of action for your project.
### STEP 2 – Drafting a plan
Once you have studied the market, it's time to plan your business accordingly. Create a road map outlining how you will market your project. Decide the target country or region and plan business tactics that will attract those users. Complete the legal and audit procedures beforehand. From app development to deployment, prepare a plan that covers all the points related to the [Crypto Payment Gateway Software Development](https://www.jploft.com/cryptocurrency-payment-gateway-development) project.
### STEP 3 – Adding Unique Features
Adding unique and innovative features makes your platform stand out, and this is a crucial step in crypto payment gateway development. Unique, easy-to-use features differentiate your platform from competitors and improve your business's reputation. Examine the planned features from both the admin and user perspectives to determine which are essential to incorporate into your cryptocurrency payment gateway platform.
### STEP 4 – Defining Tech Stack
Selecting and implementing a tech stack that fits your business requirements is a significant stage in building a crypto payment gateway. This involves choosing a blockchain network, programming languages, and the third-party tools you want to integrate into your platform, along with a framework that aligns with the selected language.
### STEP 5 – Deciding on Security & UI Features
The success of any platform depends mainly on two factors: its security and its user experience. Choosing a reliable crypto payment gateway company that can deliver strong security features will attract many users. Users prefer features like two-factor authentication and encryption that ensure their data safety. Create a payment gateway with a sleek interface and design that attracts users and boosts brand reputation.
### STEP 6- Engage in Frequent Testing
Testing your project at each step of the development process helps ensure a seamless experience for both admins and users. Testing the project frequently on different devices under different cases and scenarios helps with bug detection, and a bug-free platform enhances the user experience.
### STEP 7 – Successful Deployment
After completing all these steps, you can now launch your crypto payment gateway successfully. This is a complicated task, as launching the platform properly impacts its success rate. Hire crypto payment gateway developers who can launch your platform smoothly without any technical glitches. Deployment of your payment gateway with some marketing strategies may give you the best results.
A crypto payment gateway development company must know the strategies and stages mentioned above. Selecting a good company that offers a wide range of crypto payment gateway development services serves as a foundational step. Before building a crypto payment gateway, you should also be aware of a few other unique benefits, which we examine in more detail below.
## Benefits of Cryptocurrency Payments in E-Commerce
### Global Reach and Borderless Transactions
Crypto payment gateways enable companies to reach a global market without the constraints of traditional banking systems. Transactions are borderless, allowing merchants to accept payments from customers around the world without currency-conversion hassles.
### Lower Transaction Fees
Compared to traditional payment processors, crypto transactions often carry lower fees, reducing overall costs for businesses. This cost-effectiveness is especially valuable for companies with international customers, since cross-border fees through conventional channels can be substantial, making crypto payment gateway integration an attractive choice.
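As a rough, back-of-the-envelope illustration of this point, compare a typical percentage-plus-fixed card fee with a flat network fee. The rates below are illustrative assumptions only, not quotes from any real provider:

```python
def card_fee(amount, percent=0.029, fixed=0.30):
    """Illustrative card-processor pricing: a percentage of the amount plus a fixed fee."""
    return round(amount * percent + fixed, 2)

def crypto_fee(network_fee=1.50):
    """Illustrative flat network fee, independent of the transaction amount."""
    return network_fee

amount = 500.00
print(card_fee(amount))  # 14.8
print(crypto_fee())      # 1.5
```

The gap widens as the transaction amount grows, which is why the savings matter most for businesses handling large cross-border payments.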
### Enhanced Security and Fraud Prevention
Blockchain technology offers strong security guarantees. Transactions are cryptographically secured and irreversible, reducing the risk of fraud and chargebacks for merchants.
### Faster Settlements
Traditional banking systems can take days to settle, impacting cash flow for businesses. Crypto payment gateways provide near-instant settlement, giving businesses faster access to funds.
### Programs for Loyalty and Rewards
Businesses can design their own tokens with cryptocurrency, and these tokens can be utilized in loyalty and rewards schemes. This innovation can provide a distinct competitive edge by improving consumer engagement and retention.
### Promotion and Marketing
Emphasize in your marketing materials and on your website that Bitcoin payments are available. Stress the advantages that clients can have by selecting this cutting-edge mode of payment.
### Observance and Guidelines
Investigate local laws and regulations pertaining to Bitcoin transactions and abide by them. Maintaining compliance is critical to developing a long-lasting e-commerce business.
## Benefits for Consumers
### Privacy and Anonymity
Cryptocurrency transactions offer a degree of privacy and anonymity not available with conventional payment methods. Customers can make purchases without sharing sensitive financial details.
### Reduced Fees for International Transactions
For customers making international purchases, crypto payments frequently incur lower fees compared to standard banking channels. These savings encourage cross-border transactions and foster a more interconnected global market.
### Inclusivity and Accessibility
Many people around the world do not have access to standard banking services. Crypto payment gateway development empowers people who are unbanked or underbanked to participate in ecommerce. Access to economic services is extended to areas where traditional banking infrastructure is lacking.
## The Impact on Global E-Commerce
### Expansion of Market Reach
1. Businesses using [crypto payment gateway development services](https://www.jploft.com/cryptocurrency-payment-gateway-development) are entering new markets and regions that were previously not accessible due to financial barriers.
2. This expansion of market reach contributes to the globalization of e-commerce.
### Disrupting Traditional Financial Systems
1. Crypto payment gateways challenge the dominance of traditional financial institutions by providing an alternative way to conduct transactions.
2. This disruption fosters innovation and competition within the financial sector, ultimately benefiting consumers.
### Driving the Adoption of Cryptocurrencies
1. Consumers become more familiar and comfortable with using cryptocurrencies for everyday transactions.
2. The integration of cryptocurrencies into e-commerce via payment gateways accelerates the mainstream adoption of digital assets.
## Conclusion
For companies of all sizes, we think the guidance offered in this blog post on crypto payment gateway development is invaluable. With this understanding, you can begin the development process with confidence and produce a payment gateway that propels your company's growth and innovation.
To get the best results, consider partnering with an experienced crypto payment gateway development company with a dedicated development team. By teaming up with industry experts, you can make your journey toward success in the crypto industry far less stressful.
| rahulsukhwal |
1,868,939 | Top Trends To Explore in Music Streaming App Development | Everything is going online as people have shifted from television to OTT platforms today; every... | 0 | 2024-05-29T12:02:32 | https://dev.to/rahulsukhwal/top-trends-to-explore-in-music-streaming-app-development-3dm3 | musicstreamingappdevelopment, musicstreamingappsolutions | Everything is going online: people have shifted from television to OTT platforms, and streaming media can be found everywhere around the globe. New startups are strongly drawn to this model, and there is still plenty of room for different streaming services. The fastest-growing segment is the music streaming app industry, and building a music app requires an appropriate, detailed strategy and a highly skilled development team. One of the main reasons for the growth and popularity of music apps, especially among the younger generation, is that they make music accessible everywhere, even on low bandwidth. Music streaming apps have shaped how people choose and enjoy the music they listen to, and advanced artificial intelligence tools have further enhanced music streaming services by enabling customised recommendations and unique content.
The music industry is highly profitable, worth billions, and fiercely competitive in the tech world. Competition hinges on offering a unique set of features, and every development process takes its own approach to investment and pricing models, SWOT analysis, and USPs. This is why mobile app development is so prevalent in music streaming: the market is popular enough that there are plenty of opportunities for more companies to build an online music brand and establish a brand image in the market.
## Critical Features to Focus On When Developing Music Streaming Apps
Music streaming platforms have ideal features that should be considered during development to make apps more engaging and attract potential users. Here is a list of the most in-demand features in music streaming apps:
## Registration and Authentication
Registration and authentication are fundamental features that give users security and enable a highly personalised, user-friendly experience. It's easy to create an account using email, social media, or a mobile number and receive a one-time PIN to log in.
## Secure Payment Integration
Music streaming apps typically offer premium plans, and integrating secure payment gateways is crucial for enabling them. Secure payment integration provides a smooth payment process so users can enjoy uninterrupted streaming.
## Creating Playlist
Nothing refreshes a mood like favourite music, and having a go-to list of songs is a joy. Playlist creation gives users the freedom to curate their own music collections. This feature allows users to create playlists of their choice, accessible from anywhere at any time.
## Download Options
The option to download music and listen offline lets users enjoy non-stop music whenever they want, even without a connection, whether on a drive or anywhere else.
## Music Search and Filter
Users can quickly search and filter for the songs they want, while intelligent algorithms let them explore different genres of the music catalogue effortlessly and in no time.
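A minimal sketch of how such a search-and-filter feature might look on the backend (the catalog structure and field names here are hypothetical):

```python
def search_tracks(catalog, query=None, genre=None):
    """Case-insensitive title search with an optional genre filter."""
    results = catalog
    if query:
        results = [t for t in results if query.lower() in t["title"].lower()]
    if genre:
        results = [t for t in results if genre.lower() in (g.lower() for g in t["genres"])]
    return results

# Hypothetical catalog entries
catalog = [
    {"title": "Midnight Drive", "genres": ["Synthwave"]},
    {"title": "Morning Raga", "genres": ["Classical"]},
    {"title": "Midnight Rain", "genres": ["Pop"]},
]
print([t["title"] for t in search_tracks(catalog, query="midnight")])
# ['Midnight Drive', 'Midnight Rain']
```

A production app would back this with a search index rather than linear scans, but the filtering contract stays the same.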
## Music Streaming App Development Services – Legal Requirements
Before getting into any business, research the industry thoroughly: its profits, growth, losses, and risks. App development service providers work on several projects and must follow the proper legal procedures to avoid getting stuck in problems. It is essential to understand market conditions with a dynamic approach and to have a clear purpose when starting development of a music streaming app. In particular, understand licensing requirements and avoid copyright issues so the app can carve out a safe space for itself in the market.
Music streaming apps routinely collect and handle user data, including private information like email addresses, payment details, and listening preferences. Developers must follow privacy and data protection laws, such as the General Data Protection Regulation (GDPR) in the EU and the California Consumer Privacy Act (CCPA) in the US. Obtaining users' consent before data collection and implementing strong data security protocols are necessary to protect user information. To avoid risks and litigation for both the application and the business, it is imperative to adhere to the appropriate rules and regulations. Legal experts should be part of the development process; their guidance helps protect the app's integrity and longevity.
## Latest and Emerging Trends in Music Streaming App Development
Music streaming app development is booming, with ongoing advancements, new development practices, and constantly updated features. To succeed in this cutthroat environment, developers must keep abreast of the most recent developments affecting the sector and its audience (including artists). When creating music streaming apps, consider the following significant trends:
## Personalization and Recommendation Systems
In today's music streaming applications, personalization is essential. Consumers expect personalised recommendations that take into account context, interests, and listening patterns. By incorporating recommendation algorithms driven by AI and machine learning, developers can offer users curated playlists, customised radio stations, and intelligent recommendations based on location, mood, and time of day.
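As a toy illustration of the idea, the sketch below scores unheard tracks by how often their genres appear in a user's listening history. The data model is hypothetical, and production systems rely on far more sophisticated ML models:

```python
from collections import Counter

def recommend(history, catalog, k=2):
    """Recommend unheard tracks whose genres match the user's most-played genres."""
    genre_counts = Counter(g for track in history for g in track["genres"])
    heard = {t["title"] for t in history}
    candidates = [t for t in catalog if t["title"] not in heard]
    # Score each candidate by how often its genres appear in the history
    candidates.sort(key=lambda t: -sum(genre_counts[g] for g in t["genres"]))
    return [t["title"] for t in candidates[:k]]

# Hypothetical listening history and catalog
history = [
    {"title": "Track A", "genres": ["Pop"]},
    {"title": "Track B", "genres": ["Pop"]},
    {"title": "Track C", "genres": ["Jazz"]},
]
catalog = history + [
    {"title": "Track D", "genres": ["Pop"]},
    {"title": "Track E", "genres": ["Jazz"]},
    {"title": "Track F", "genres": ["Rock"]},
]
print(recommend(history, catalog))  # ['Track D', 'Track E']
```

Real services replace this frequency count with collaborative filtering or learned embeddings, but the shape of the problem, scoring unseen items against a user profile, is the same.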
## High-Quality Audio Streaming
Users want better streaming sound quality, and high-fidelity audio formats like FLAC and Hi-Res Audio are becoming more and more popular. Supporting high-quality audio streaming can draw in music lovers and audiophiles who value sound quality above all. To deliver high-end audio experiences, developers should consider partnering with audio equipment manufacturers and applying lossless audio compression and other technical techniques.
## Collaborative and Social Integration Features
Features that promote community interaction and strengthen the social side of music streaming include social media sharing, group listening sessions, and collaborative playlists. Developers can incorporate social media APIs so users can share their favourite songs, playlists, and activity on Facebook, Twitter, and Instagram. Collaborative playlist capabilities also let users create and share playlists with peers, making it easier for friends to discover music together.
## Voice Control and Smart Assistants
Users can operate hands-free and enjoy convenience with voice-controlled music streaming. By using voice recognition technology, users may use voice commands to navigate the app, control playing, and search for music. Moreover, integration with well-known intelligent assistants like Apple Siri, Google Assistant, and Amazon Alexa allows users to effortlessly manage and enjoy music playback across platforms and devices with the help of music streaming app development services
## Conclusion
Summing up, building a competitive app that cultivates great user experiences requires keeping up with the latest developments in music streaming app development. By incorporating personalised recommendations, immersive audio technologies, voice control, social features, blockchain solutions, offline listening options, and wearable device integration, developers can create feature-rich music streaming apps that meet users' changing needs and preferences and become the brand everybody reaches for first.
| rahulsukhwal |
1,868,938 | Navigating the Labyrinth of Healthcare: The Human Touch in the Digital Era | In an age described with the aid of technological marvels and digital solutions, the healthcare... | 0 | 2024-05-29T12:01:53 | https://dev.to/liong/navigating-the-labyrinth-of-healthcare-the-human-touch-in-the-digital-era-4oik | healthcare, management, malaysia, humans | In an age defined by technological marvels and digital solutions, the healthcare industry stands at the intersection of innovation and humanity. Amidst the proliferation of Clinical Management Systems (CMS) and electronic health records, it is imperative to recognize the essential role of human connection in healthcare delivery. While these systems undoubtedly enhance efficiency and streamline processes, they must be integrated thoughtfully to preserve the essence of compassionate care.
**The Rise of Clinical Management Systems**
[Clinical Management Systems](https://ithubtechnologies.com/clinical-management-system/?utm_source=dev.to%2F&utm_campaign=ClinicalManagementSystems&utm_id=Offpageseo+2024) have undeniably transformed the landscape of healthcare administration. From appointment scheduling to electronic health record management, these digital systems have revolutionized the way medical facilities operate. With features like automated billing, real-time data entry, and decision-support tools, CMS platforms promise unprecedented efficiency and accuracy in healthcare delivery.
**The Pitfalls of Over-Reliance on Technology**
However, amidst the allure of technological innovation, it is crucial to acknowledge the potential pitfalls of over-reliance on Clinical Management Systems. While these systems excel at streamlining administrative tasks and optimizing workflows, they must not overshadow the importance of human interaction in healthcare. Patients are not mere data points or entries in a digital database; they are individuals with unique needs, fears, and emotions who require personalized attention and care.
**The Human Element in Healthcare**
At the heart of healthcare delivery lies the human element: the empathetic touch of a caregiver, the reassuring smile of a nurse, and the comforting presence of a physician. These intangible aspects of care cannot be replicated by even the most sophisticated Clinical Management System. While technology can facilitate communication and data management, it cannot replace the human connection that is essential for healing and well-being.
**Integrating Technology with Compassion**
The challenge, therefore, lies in striking a delicate balance between technological advancement and human-centered care. Clinical Management Systems should be viewed not as substitutes for human interaction but as tools to enhance the delivery of compassionate care. By integrating technology with empathy, healthcare organizations can harness the full potential of CMS platforms while maintaining the human touch that distinguishes healthcare from other industries.
**Fostering Patient-Centered Care**
Central to this paradigm shift is the concept of patient-centered care, which prioritizes the individual needs and preferences of patients. While Clinical Management Systems excel at data management and process optimization, they must be designed with a deep understanding of the human experience of illness and healing. User-centric design principles should guide the development of CMS platforms, ensuring that they empower healthcare providers to deliver care that is genuinely personalized and compassionate.
**Cultivating Empathy in Healthcare Providers**
In addition to technological solutions, cultivating empathy and compassion among healthcare providers is essential for delivering patient-centered care. Medical training curricula should emphasize communication skills, cultural competence, and emotional intelligence alongside clinical knowledge. By fostering a culture of empathy and humanism within healthcare institutions, we can ensure that the human touch remains at the forefront of patient care.
**Embracing Innovation without Losing Sight of Humanity**
In conclusion, the advent of Clinical Management Systems represents a significant milestone in the evolution of healthcare delivery. These digital platforms hold great potential to improve efficiency, accuracy, and outcomes in clinical practice. However, as we embrace technological innovation, we must not lose sight of the essence of healthcare: the human connection between patients and providers.
**Striking a Balance**
The integration of Clinical Management Systems should be approached with mindfulness and intentionality, ensuring that they complement rather than overshadow the human element of care. By striking a delicate balance between technology and compassion, we can navigate the labyrinth of healthcare delivery with empathy, humanity, and an unwavering commitment to the well-being of our patients.
## Conclusion
In the evolving landscape of healthcare, Clinical Management Systems offer remarkable efficiency, but their integration must preserve the human touch. While these systems streamline processes, they should enhance, not overshadow, the compassionate care provided by healthcare professionals. As we embrace technological innovation, let us remember that behind every data point lies a patient in need of empathy and understanding. Balancing technology with humanity ensures that healthcare remains a holistic endeavor, enriching both the patient experience and the practice of medicine.
| liong |
1,868,937 | OKE: Simple Steps | Create an OKE Cluster in OCI, with default settings Create kubeconfig file in your connecting... | 0 | 2024-05-29T12:00:47 | https://dev.to/paihari/oke-simple-steps-2e9 | - Create an OKE Cluster in OCI, with default settings

- Create a kubeconfig file on the machine you are connecting from.
```
oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.eu-zurich-1.aaaaaaaa5s4il4pmzpg274oahr4z2at5do3ws7pemss3svdp2cbaobenjnoa --file $HOME/.kube/config --region eu-zurich-1 --token-version 2.0.0 --kube-endpoint PUBLIC_ENDPOINT
```
```
kubectl create -f https://k8s.io/examples/application/deployment.yaml
kubectl get pods
```
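The `deployment.yaml` referenced above is the standard Kubernetes example manifest. As a sketch of what gets created (an nginx Deployment; the replica count and image tag here are recalled from the upstream example and may differ, so check the file itself):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2                # upstream example value; may differ
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2   # assumed tag; verify against the upstream file
          ports:
            - containerPort: 80
```

Once the image has been pulled, `kubectl get pods` should list the running replicas.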
| paihari | |
1,868,678 | MDB Angular Version 6.1.0 released! | Version 6.1.0, released 27.05.2024 Fixes: Multi range Fixed problem with... | 0 | 2024-05-29T12:00:00 | https://dev.to/keepcoding/mdb-angular-version-610-released-jae | news, angular, bootstrap, css | ## Version 6.1.0, released 27.05.2024
## Fixes:
**Multi range**
- Fixed problem with thumb limiting logic when using custom step
- Fixed problem with updating thumb positions via form controls
**Popconfirm** - added focus trap
**Autocomplete** - restored native shift + home and shift + end keys behavior (open/close dropdown)
**Select** - added support for opening and closing dropdown with alt + arrow-up and alt + arrow-down keys
## New features:
**Table pagination** - added new page input that allows you to set the page number
**Multi range** - added new highlightRange input that allows you to highlight a range
**Parallax** - added new container input that allows you to set a wrapper element for the parallax effect
**[MDB Angular Version 6.1.0](https://mdbootstrap.com/docs/angular/)** | keepcoding |
1,868,936 | RabbitMQ Upgrade Best Practices | Some best practices for upgrading RabbitMQ | 0 | 2024-05-29T11:59:51 | https://dev.to/stoft/rabbitmq-upgrade-best-practices-1i82 | rabbitmq | ---
title: RabbitMQ Upgrade Best Practices
published: true
description: Some best practices for upgrading RabbitMQ
tags: #RabbitMQ
# published_at: 2024-05-29 07:58 +0000
---
Some simple best practices around performing [RabbitMQ](https://www.rabbitmq.com) upgrades.
### Testing a New Version
* Test the upgrade process.
* Test the new version with your services/clients.
* If doing rolling upgrades, test your services with a mixed cluster using the versions you will upgrade from/to.
* Test rolling back.
### Rolling Upgrade
Adjust according to your setup.
Per node checklist:
- [ ] Assume roughly 5mins per node.
- [ ] Ensure the node is not just visible in the dashboard, but also accessible via e.g. DNS names and similar.
- [ ] Allow the cluster to stabilise with regards to replication, queue levels and similar.
- [ ] Ensure the underlying (cloud) platform reports a steady state.
- [ ] Ensure that clients can connect to the node and communicate.
⚠️ Rollback always takes longer than upgrade.
### Release Windows
* One release per window. If you need to do several upgrades, schedule several windows.
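The per-node checklist above can be sketched as a dry-run script. Node names are placeholders, and the echoed steps (including the `rabbitmq-upgrade drain` command available in RabbitMQ 3.8+) are assumptions to adapt to your own tooling; the script only prints the plan, it does not execute anything:

```shell
#!/bin/sh
# Dry-run: print the per-node rolling-upgrade plan instead of executing it.
upgrade_node() {
  node="$1"
  echo "[$node] drain the node (e.g. rabbitmq-upgrade drain)"
  echo "[$node] upgrade the package and restart the service"
  echo "[$node] wait for replication and queue levels to stabilise (~5 min budget)"
  echo "[$node] verify DNS, dashboard visibility and client connectivity"
}

for node in rabbit-1 rabbit-2 rabbit-3; do
  upgrade_node "$node"
done
```

Replace the `echo` lines with your actual package-manager and health-check commands once the plan matches your environment.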
| stoft |
1,868,935 | Cheaper Software | Unlock your productivity with cheaper software solutions! Save big without compromising quality.... | 0 | 2024-05-29T11:59:18 | https://dev.to/cdrbsoftwares/cheaper-software-1h47 | Unlock your productivity with cheaper software solutions! Save big without compromising quality. Explore now for affordable options.
https://www.cdrbsoftwares.com/ | cdrbsoftwares | |
1,867,375 | Setting Up An Angular Project | Table of Contents Introduction Prerequisites Install Angular CLI Create a New Angular... | 0 | 2024-05-29T11:58:05 | https://dev.to/reliable25/setting-up-an-angular-project-383n | angular, frontend, html, typescript | ## Table of Contents
- [Introduction](#intro)
- [Prerequisites](#pre-requisites)
- [Install Angular CLI](#angular-cli)
- [Create a New Angular Project](#angular-project)
- [Go to Your Project Directory](#project-directory)
- [Project Structure Overview](#project-structure)
- [Building the Project for Production](#prod)
- [Conclusion](#conclusion)
<a id= "intro"></a>
## Introduction
Angular is a popular front-end framework for building dynamic web applications. It uses TypeScript by default for writing a component's logic and methods, but browsers cannot execute TypeScript directly.
Setting up an Angular project from scratch can be a daunting task, especially for beginners. This is where the Angular Command Line Interface (CLI) comes into the picture, serving as a powerful tool to streamline the project setup process and enhance developer productivity. Under the hood, the CLI's build pipeline (historically Webpack-based) compiles the TypeScript files to JavaScript, and it generates the many configuration files an Angular project needs to run on your computer.
<a id= "pre-requisites"></a>
## Prerequisites
Before you begin, make sure you have the following installed:
- **Node.js:**
Angular requires Node.js for the development environment. You can download and install it from [nodejs website](https://nodejs.org/en).
- **npm** (Node Package Manager): It comes with Node.js, so if you have Node.js installed, you should have npm as well.
You can verify the installation by running the following commands in your terminal:
```bash
node --version
npm --version
```
<a id= "angular-cli"></a>
## Install Angular CLI
Angular CLI (Command Line Interface) is a powerful tool that helps you create and manage Angular projects. Install it globally using npm:
```bash
npm install -g @angular/cli
```
<a id= "angular-project"></a>
## Create a New Angular Project
Use Angular CLI to create a new project. Open your terminal and run:
```bash
ng new my-angular-app
```
- Replace `my-angular-app` with your desired project name.
- You'll be prompted to answer a few questions about your project setup, such as whether to include Angular routing and which stylesheets format to use (CSS, SCSS, etc.).

<a id= "project-directory"></a>
## Go to Your Project Directory
After the project is created, navigate into the project directory:
```bash
cd my-angular-app
```
**Run server and see your application in action**
```bash
ng serve
```
- By default, the application will be served at http://localhost:4200/.
- Open your web browser and navigate to http://localhost:4200/ to see your new Angular application running.

**Note:** If you need to change the port, for example, because another application is using the default port, you can run:
```bash
ng serve --port desiredPort
```
For instance:
```bash
ng serve --port 3200
```
This will serve the application on http://localhost:3200/.
<a id= "project-structure"></a>
## Project Structure Overview
Here's a brief overview of the main folders and files in your Angular project:

**- node_modules/:** Contains all the npm packages installed for your project.
**- src/:** Contains the source code of your application.
**- app/:** Main app directory where your components, services, and other parts of your application reside.
**- app.component.*:** Main component files (HTML, CSS, TypeScript, and spec file for testing).
**- assets/:** For static assets like images.
**- environments/:** For environment-specific configurations.
When you create a new Angular project, the `src/environments/` directory contains two files by default:
- environment.ts: Used for development settings.
```typescript
export const environment = {
production: false,
apiUrl: 'http://localhost:3000/api',
};
```
- environment.prod.ts: Used for production settings.
```typescript
export const environment = {
production: true,
apiUrl: 'https://api.myapp.com',
};
```
In these examples:
- production: A boolean indicating if the environment is production.
- apiUrl: The base URL for API requests.
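A framework-free sketch of how application code typically consumes these settings. `buildUrl` is a hypothetical helper, not something the CLI generates; at build time, classic CLI projects swap `environment.ts` for `environment.prod.ts` via the `fileReplacements` entry in `angular.json`:

```typescript
// Mirrors the development environment file shown above.
const environment = {
  production: false,
  apiUrl: 'http://localhost:3000/api',
};

// Hypothetical helper: join the base URL and a resource path with one slash.
function buildUrl(resource: string): string {
  const base = environment.apiUrl.replace(/\/+$/, '');
  const path = resource.replace(/^\/+/, '');
  return `${base}/${path}`;
}

console.log(buildUrl('/users')); // http://localhost:3000/api/users
```

Because code only ever references `environment`, switching between development and production settings requires no changes to the application source.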
**- angular.json:** Configuration file for Angular CLI.
**- package.json:** This file stores information about the libraries added to the project, along with their installed versions. Whenever a new library is added to the project, its name and version are recorded in the dependencies section of package.json.
**Modifying the App Component**
You can start modifying the default component to get a feel of how Angular works. Open `src/app/app.component.html` and change the content to:
```html
<h1>Welcome to My Angular App!</h1>
<div>
<p>
This is my First Angular app.
</p>
</div>
```
Save the file, and your changes should automatically be reflected in the browser.

**Adding a New Component**
Create a new component using Angular CLI. For example, to create a `hello-world component` :
```bash
ng generate component hello-world
```
This command will create a new folder hello-world in the `src/app` directory with the component files.
Inside the hello-world folder, Angular CLI generates four files:
- hello-world.component.css: The CSS file containing the component's styles.
- hello-world.component.html: The HTML file containing the component's template.
- hello-world.component.spec.ts: The TypeScript file for unit testing the component.
- hello-world.component.ts: The TypeScript file containing the component class.
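For reference, the generated class file typically looks like the following. This is a scaffold fragment rather than standalone-runnable code, and the exact contents vary slightly between Angular versions (for example, newer versions may generate standalone components):

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'app-hello-world',                  // the tag used in templates
  templateUrl: './hello-world.component.html',  // the component's template
  styleUrls: ['./hello-world.component.css']    // the component's styles
})
export class HelloWorldComponent {}
```

The `selector` value is what you place in `app.component.html` in the next step.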
**Using the New Component**
To use the new hello-world component, add its selector tag to `app.component.html`:
```html
<h1>Welcome to My Angular App!</h1>
<div>
<p>
This is my First Angular app.
</p>
</div>
<app-hello-world></app-hello-world>
```
Open `src/app/hello-world/hello-world.component.html` and add some content to it, like:
```html
<p>
Hello, World!.
</p>
```
Save the files and see the changes in your browser.

<a id='prod'></a>
## Building the Project for Production
To build the project for production, run:
```bash
ng build --prod
```
This will create a `dist/` directory with the production-ready files.

Building your Angular project for production with `ng build --prod` prepares your application for deployment
<a id='conclusion'></a>
## Conclusion
In this article, you've successfully set up an Angular project and created a new component. From here, you can start building more complex features and exploring Angular's capabilities. Whether you're adding new components, integrating services, or enhancing your application's performance, Angular provides a robust framework to help you achieve your goals.
Here is the [GitHub repository](https://github.com/Reliable25/New-Angular-App) for this article in case you need to check it out. Lastly, if you have found value in this article, please consider sharing it with your peers who may also benefit from it.
What are your thoughts on the topic "Setting Up An Angular Project"? Feel free to share your thoughts in the comments section below.
Happy coding! | reliable25 |
1,868,940 | Full Guide on Salesforce Dynamics CRM Integration for Professional Services | Introduction The integration of Salesforce and Microsoft Dynamics CRM offers professional... | 0 | 2024-05-29T12:06:39 | https://www.sfapps.info/salesforce-dynamics-crm-integration-for-professional-services/ | blog, industries | ---
title: Full Guide on Salesforce Dynamics CRM Integration for Professional Services
published: true
date: 2024-05-29 11:57:22 UTC
tags: Blog,Industries
canonical_url: https://www.sfapps.info/salesforce-dynamics-crm-integration-for-professional-services/
---
## Introduction
The integration of [Salesforce](https://www.salesforce.com/eu/) and [Microsoft Dynamics CRM](https://www.microsoft.com/de-de/dynamics-365) offers professional services a robust framework for managing client relationships, streamlining operations, and enhancing overall productivity. Combining the capabilities of Salesforce, a leader in customer relationship management (CRM), with Dynamics CRM from Microsoft allows businesses to leverage the strengths of both platforms. This integration can transform how professional services operate, enabling better data management, seamless communication, and improved customer insights.
**The Benefits of Salesforce Dynamics CRM Integration**
The integration of Salesforce with Dynamics CRM brings numerous benefits to professional services organizations, like [professional services automation Salesforce](https://www.sfapps.info/salesforce-implementation-for-professional-services/). Here are some key advantages:
**Unified Customer View:** By integrating these two powerful CRM systems, businesses can achieve a single, unified view of their customers. This holistic perspective helps in understanding customer behavior, preferences, and history, leading to more personalized and effective service delivery.
**Improved Data Accuracy:** Data synchronization between Salesforce and Dynamics CRM ensures that all information is up-to-date and accurate across platforms. This reduces the risk of data duplication and errors, which can significantly impact business operations and decision-making.
**Enhanced Productivity:** Automation of routine tasks and streamlined workflows are major benefits of this integration. By reducing manual data entry and improving information flow, employees can focus on more strategic tasks that drive business growth.
**Better Reporting and Analytics:** Combining data from both Salesforce and Dynamics CRM enables richer reporting and analytics capabilities. Businesses can generate comprehensive reports that provide deeper insights into performance metrics and customer trends.
**Scalability and Flexibility:** The integration allows businesses to scale their operations seamlessly as they grow. It provides the flexibility to adapt to changing business needs and integrates with other systems and tools used within the organization.
**Enhanced Customer Experience:** With a unified CRM system, customer-facing teams have access to all relevant information, enabling them to deliver a more cohesive and satisfactory customer experience.

## Microsoft Dynamics CRM
[Microsoft Dynamics CRM](https://www.microsoft.com/en-us/dynamics-365) (Customer Relationship Management) is a comprehensive business solution designed to manage customer relationships, streamline processes, and improve profitability for organizations of all sizes. It integrates with other Microsoft services and products, providing a unified approach to managing various business functions. Here’s an in-depth look at what makes Microsoft Dynamics CRM a powerful tool for professional services.
### Core Features of Microsoft Dynamics CRM
**1. Sales Module**
- **Opportunity Management:** Track sales opportunities from lead generation to closing deals.
- **Sales Forecasting:** Use advanced analytics to predict sales trends and outcomes.
- **Customer Insights:** Gain a 360-degree view of customers to personalize sales interactions.
**2. Customer Service Module**
- **Case Management:** Manage and resolve customer service issues efficiently.
- **Knowledge Base:** Provide agents with access to a repository of solutions and best practices.
- **Service Scheduling:** Optimize service schedules to improve customer satisfaction and resource utilization.
**3. Marketing Module**
- **Campaign Management:** Plan, execute, and analyze marketing campaigns across multiple channels.
- **Lead Nurturing:** Automate leads nurturing processes to convert prospects into customers.
- **Event Management:** Organize and manage events to engage customers and prospects.
**4. Field Service Module**
- **Work Order Management:** Schedule and manage work orders for field service technicians.
- **Resource Optimization:** Allocate resources efficiently based on skills, availability, and location.
- **Mobile Access:** Enable field technicians to access information and update statuses on the go.
**5. Project Service Automation (PSA)**
- **Project Planning:** Plan and manage projects with detailed timelines and resource allocation.
- **Time and Expense Management:** Track time and expenses accurately for billing and analysis.
- **Collaboration Tools:** Facilitate collaboration among project teams with integrated communication tools.
### Benefits for Professional Services
**Enhanced Customer Relationship Management**
Microsoft Dynamics CRM offers a centralized platform to manage all customer interactions, ensuring that professional services firms can maintain strong relationships with their clients. This leads to increased customer satisfaction and loyalty.
**Streamlined Business Processes**
By automating routine tasks and integrating various business functions, Dynamics CRM helps streamline operations. This not only reduces manual effort but also improves efficiency and accuracy across the organization.
**Improved Decision Making**
The advanced analytics and reporting features of Dynamics CRM provide actionable insights that help in making informed business decisions. This is particularly beneficial for professional services firms that rely on data-driven strategies.
**Scalability and Flexibility**
Microsoft Dynamics CRM is highly scalable and can be customized to meet the specific needs of different professional services firms. Whether you are a small business or a large enterprise, Dynamics CRM can grow with your business; you can also [hire Dynamics 365 developers](https://www.hiredynamicsdevelopers.com/) to tailor it to your needs.
**Seamless Integration**
Dynamics CRM seamlessly integrates with other Microsoft products such as Office 365, Azure, and Power BI. This ensures that businesses can leverage their existing Microsoft investments while enhancing their CRM capabilities.
Looking for professional help with Salesforce & Dynamics CRM integration for Professional Services?
Get in touch with our parent company!
[Explore More](https://mobilunity.com/tech/hire-salesforce-developers/)
## Salesforce CRM
Salesforce is a leading cloud-based customer relationship management (CRM) platform that provides a comprehensive suite of tools for managing customer interactions, sales processes, and business operations. Widely used across various industries, Salesforce is renowned for its flexibility, scalability, and extensive ecosystem of applications and integrations. Here’s an in-depth look at what makes Salesforce a powerful tool for professional services.
### Core Features of Salesforce
**1. Sales Cloud**
- **Lead Management:** Track and manage leads through every stage of the sales pipeline.
- **Opportunity Management:** Monitor opportunities with detailed tracking and forecasting tools.
- **Sales Forecasting:** Utilize AI-driven analytics to predict sales trends and outcomes.
**2. Service Cloud**
- **Case Management:** Handle customer service cases efficiently with automated workflows and comprehensive case histories.
- **Knowledge Base:** Provide agents with access to a repository of solutions and best practices.
- **Omni-Channel Routing:** Direct customer inquiries to the right agents based on skills and availability.
**3. Marketing Cloud**
- **Email Marketing:** Design and execute personalized email campaigns.
- **Social Media Management:** Engage with customers across social media platforms.
- **Customer Journey Mapping:** Plan and optimize the customer journey across various touchpoints.
**4. Commerce Cloud**
- **E-Commerce:** Manage online stores with robust e-commerce capabilities.
- **Personalized Shopping Experiences:** Use AI to tailor shopping experiences to individual customers.
- **Order Management:** Handle orders from multiple channels in a unified system.
**5. Analytics Cloud**
- **Data Visualization:** Create interactive dashboards and reports.
- **Mobile Accessibility:** Access analytics on the go with mobile-friendly interfaces.
- **AI-Powered Insights:** Leverage Einstein Analytics for predictive insights and advanced analytics.
**6. Salesforce Platform**
- **App Development:** Build custom applications with the Lightning Platform.
- **Integration:** Connect Salesforce with other systems using robust APIs and pre-built connectors.
- **Security:** Ensure data security with advanced security features and compliance with industry standards.
### Insight:
For professional services firms focusing on B2B marketing, Salesforce’s Pardot, now [Marketing Cloud Account Engagement](https://www.salesforce.com/eu/marketing/b2b-automation/), offers robust marketing automation capabilities. It helps in generating leads, nurturing them through personalized campaigns, and tracking ROI. [Hiring a Pardot consultant](https://www.sfapps.info/salesforce-pardot-consultant-for-professional-services/) can further increase your marketing effectiveness.
### Benefits for Professional Services
**Enhanced Client Management**
Salesforce provides a unified view of each client, encompassing all interactions, transactions, and preferences. This enables professional services firms to deliver highly personalized and efficient services.
**Streamlined Operations**
Automation of routine tasks and integration with other business systems reduce manual effort and improve operational efficiency. Salesforce’s robust workflows and process automation tools are particularly beneficial in this regard.
**Improved Collaboration**
With tools like Chatter, Salesforce enhances collaboration among team members. Real-time communication and document sharing ensure that everyone stays on the same page, leading to more coordinated efforts and better service delivery.
**Scalability and Customization**
Salesforce’s modular architecture allows businesses to scale their CRM capabilities as they grow. The platform can be customized extensively to meet the specific needs of professional services firms, from tailored dashboards to custom applications.
**Data-Driven Insights**
The analytics capabilities of Salesforce provide deep insights into business performance and customer behavior. Professional services firms can leverage these insights to make informed decisions, optimize their strategies, and anticipate client needs.
**Integration Capabilities**
Salesforce integrates seamlessly with a wide range of third-party applications and systems, including Microsoft Dynamics CRM. This enables businesses to create a cohesive ecosystem where data flows seamlessly across platforms, enhancing overall productivity and data accuracy.
## Integration Process Step by Step
Integrating Salesforce with Dynamics CRM involves several critical steps to ensure a smooth and efficient process. Here, we’ll break down the integration process into manageable phases, incorporating the latest trends and best practices for 2024.
### 1. Define Objectives and Scope
Before diving into the technical aspects, it’s crucial to define the objectives and scope of the project:
- **Identifying Key Stakeholders:** Involve all relevant departments, including IT, sales, marketing, and customer service, ensuring everyone’s needs are considered.
- **Setting Clear Goals:** Outline specific outcomes, such as improved data accuracy, enhanced customer insights, or streamlined operations.
- **Defining the Scope:** Determine which data and processes will be integrated, and identify any limitations or exclusions.
### 2. Assess System Requirements
Evaluate the system requirements for both Salesforce and Dynamics CRM:
- **Compatibility Checks:** Ensure the versions of Salesforce and Dynamics CRM are compatible for integration.
- **Data Mapping:** Identify the data fields in both systems that need to be synchronized, such as customer information, sales data, and support tickets.
- **API Availability:** Verify the availability and functionality of necessary APIs for integration.
### 3. Choose the Right Integration Tools
Selecting the right tools is vital for successful integration:
- **Middleware Solutions:** Tools like [MuleSoft](https://www.mulesoft.com/), [Boomi](https://boomi.com/), and [Jitterbit](https://www.jitterbit.com/) offer robust integration capabilities with pre-built connectors for both platforms, and can also handle related integrations such as [Shopify Plus to Salesforce](https://www.sfapps.info/shopify-plus-to-salesforce-migration-guide/).
- **Native Integration Options:** Leverage any native integration features provided by Salesforce or Dynamics CRM.
- **Custom Development:** In cases where unique requirements exist, custom integration may be necessary, requiring development expertise.
### 4. Plan and Design the Integration
Careful planning and design are critical:
- **Create a Detailed Integration Plan:** Outline each step, including timelines and resource allocation.
- **Design the Data Flow:** Map out how data will flow between Salesforce and Dynamics CRM, ensuring alignment with business processes.
- **Set Up Data Validation Rules:** Implement rules to ensure data integrity and accuracy during the integration process.
### 5. Implement and Test the Integration
Begin the implementation phase with a thorough testing process:
- **Configure the Integration Tools:** Set up the chosen tools according to the integration plan.
- **Test in a Sandbox Environment:** Conduct thorough testing in a controlled environment to identify and resolve issues before going live.
- **Perform Data Migration:** If necessary, migrate existing data, ensuring consistency and completeness.
### 6. Go Live and Monitor
After successful testing, it’s time to go live:
- **Execute the Go-Live Plan:** Carefully follow the go-live plan to activate the integration in the production environment.
- **Monitor the Integration:** Continuously monitor to ensure it functions as expected, addressing any issues promptly.
- **Gather Feedback:** Collect feedback from end-users to identify areas for improvement.
### 7. Ongoing Maintenance and Optimization
Post-implementation, focus on ongoing maintenance and optimization:
- **Regular Updates:** Keep integration tools and platforms updated to maintain compatibility and security.
- **Performance Optimization:** Continuously seek ways to enhance integration performance.
- **User Training:** Provide ongoing training to help users maximize the benefits of the integrated systems.
### Latest Trends in Salesforce Dynamics CRM Integration
**Artificial Intelligence and Machine Learning:** AI and ML are transforming CRM by automating routine tasks, enhancing data analysis, and enabling predictive analytics. These technologies help in personalizing customer interactions and improving decision-making processes.
**Enhanced Cybersecurity:** With growing concerns over data breaches, integrating advanced security measures such as end-to-end encryption, multi-factor authentication, and blockchain technology is becoming a standard practice. This ensures that customer data is protected, fostering trust and compliance with global regulations.
**Hyper-Personalization:** Leveraging AI to deliver hyper-personalized experiences is a significant trend. This involves using data to tailor interactions based on individual customer preferences, thereby enhancing customer satisfaction and loyalty.
**Integration with IoT:** The [Internet of Things](https://www.salesforce.com/ap/internet-of-things/) (IoT) is increasingly being integrated into CRM systems, allowing businesses to collect and analyze data from [connected devices](https://www.salesforce.com/products/platform/best-practices/internet-of-things-connects-customers/). This integration provides deeper insights into customer behavior and preferences, improving operational efficiency and customer experience.
**Automation Improvements:** Automation continues to be a core technology in CRM, helping businesses manage large volumes of data and streamline processes. Advanced automation features include automated data entry, AI-powered chatbots, and real-time data analysis.
## Final Thoughts
Integrating Salesforce and Microsoft Dynamics CRM is a strategic decision for professional services firms looking to enhance customer engagement, streamline operations, and drive business growth. The synergy of these two powerful platforms creates a strong foundation for managing client relationships, improving operational efficiency, and gaining valuable insights through advanced analytics.
The Microsoft Dynamics Salesforce integration offers a unified view of customer data by merging sales, service, and marketing information into a single platform. This integration enables firms to better understand their client’s needs, personalize interactions, and deliver superior service. Automating routine tasks and integrating disparate systems reduces manual effort and minimizes errors, allowing staff to focus on higher-value activities and improving overall productivity and efficiency.
Leveraging the advanced analytics and reporting capabilities of both Salesforce and Dynamics CRM, firms can make informed decisions based on real-time data. This data-driven approach helps identify trends, forecast future outcomes, and optimize business strategies. The Dynamics 365 salesforce integration ensures scalability and customization, allowing the integration to grow with the business. Whether a firm is expanding its services, entering new markets, or adapting to client needs, the integration provides the necessary flexibility to support these transitions seamlessly.
The Salesforce Dynamics integration is not merely a technological enhancement but a strategic initiative that can transform how professional services firms operate and serve their clients. By embracing this integration, firms can achieve a competitive edge, build stronger client relationships, and drive sustainable growth in an ever-evolving market.
Adopting the Dynamics CRM integration with Salesforce unlocks new opportunities and efficiencies, driving both immediate and long-term success. Additionally, Microsoft Dynamics CRM salesforce integration ensures that as your business grows, the CRM system scales to meet new challenges and demands. The Salesforce Business Central integration further supports the scalability and adaptability required for dynamic business environments.
The post [Full Guide on Salesforce Dynamics CRM Integration for Professional Services](https://www.sfapps.info/salesforce-dynamics-crm-integration-for-professional-services/) first appeared on [Salesforce Apps](https://www.sfapps.info). | doriansabitov |
1,868,932 | Creating Forms in Ruby on Rails with Simple Form | Ruby on Rails has changed how we build web applications. Early on, the framework came with some great... | 0 | 2024-05-29T11:55:19 | https://blog.appsignal.com/2024/05/15/creating-forms-in-ruby-on-rails-with-simple-form.html | ruby, rails | Ruby on Rails has changed how we build web applications. Early on, the framework came with some great features to help you get started and build robust applications.
However, it can still be tricky to build and handle forms. Simple Form is a great option. Let's examine what Simple Form is, why we might need it, and some real use cases.
## Forms and Ruby on Rails
Ruby on Rails really simplifies building applications. Yet, it also requires your constant attention to keep your codebase streamlined and coherent. One strategy is to avoid writing code by using abstractions, for example. Code that isn't in your actual application is code you don't have to manage. That's why we like abstractions so much, right? Simple Form is just such an abstraction that simplifies building forms for web pages.
Forms are generally complex to build, even using Ruby on Rails. Each field requires attention to define the right type and attach it to the correct attribute. Simple Form eases that pain, as you don't need to find the exact type required for each field.
Let's take a form that gets a user's details, with `name`, `username`, and `email` fields. Consider that we have the `User` model with similar related attributes. With Simple Form, the form that fills in attributes looks like the following:
```erb
<%= simple_form_for @user do |f| %>
<%= f.input :name, required: false %>
<%= f.input :username %>
<%= f.input :email %>
<%= f.button :submit %>
<% end %>
```
Note that we are not specifying the types of each field. Simple Form will pick the correct input type for each field based on the column's type.
This saves us time and keeps the code readable and easier to maintain.
## Installing Simple Form
To install Simple Form in your project, add the `simple_form` gem to the Gemfile. Once you have run `bundle install`, you can use the included generator to set it up correctly for your application.
```sh
rails generate simple_form:install
```
**A note on Bootstrap and Zurb Foundation:** _Both Bootstrap and Zurb Foundation 5 are supported within Simple Form. So, if you are using one of them, you can use the `--bootstrap` or `--foundation` parameter with the generator to include additional configurations during the installation. In the case of Bootstrap, this would look like the following:_
```sh
rails generate simple_form:install --bootstrap
```
## Basic Usage of Simple Form
As we pointed out earlier, Simple Form will generate a complete form mainly based on the data related to your objects. Yet, it's also important to understand that Simple Form will not only generate the appropriate field for each attribute but also provide labels and display error hints above the input fields themselves!
Let's consider a fresh new Ruby on Rails 7.x application. We'll generate a User model and build a form for it using Simple Form.
```sh
rails g model User username:string password:string email:string remember_me:boolean
# and follow with running the migration of course
rails db:migrate
```
Now we can generate a controller with the `new`, `create`, and `index` actions. The aim is to have something we can play with in the form.
```sh
rails g controller users
```
```ruby
# app/controllers/users_controller.rb
class UsersController < ApplicationController
def new
render :new, locals: { user: User.new }
end
def create
user = User.create(user_params)
redirect_to users_path
end
def index
render :index, locals: { users: User.all }
end
private
def user_params
params.require(:user).permit(:username, :password, :email)
end
end
```
```ruby
# config/routes.rb
Rails.application.routes.draw do
resources :users, only: [:new, :create, :index]
end
```
These actions require relevant views. Here is a form that handles a user signing up or being added:
```erb
<!-- app/views/users/new.html.erb -->
<%= simple_form_for user do |f| %>
<%= f.input :username, label: 'Your username please', error: 'Username is mandatory, please specify one' %>
<%= f.input :password, hint: 'No special characters.' %>
<%= f.input :email, placeholder: 'user@domain.com' %>
<%= f.input :remember_me, inline_label: 'Yes, remember me' %>
<%= f.button :submit %>
<% end %>
```
In this case, we specify a custom label and error for the `username` input. If no such customization is passed, it will do its best to guess what the label needs to be based on the attribute's name.
We have set a custom hint for the `password` field and a custom `placeholder` for the `email` field. Notice, also, the ability to specify an "inline label" instead of one located above the field (the usual default in forms).
It's also possible to disable labels, hints, and errors by passing the `false` value for those attributes: `<%= f.input :password_confirmation, label: false %>`.
We need to complement this with one more view to make it usable.
```erb
<!-- app/views/users/index.html.erb -->
<ul>
<% users.each do |user| %>
<li><%= user.username %> (<%= user.email %>)</li>
<% end %>
</ul>
<p>
<%= link_to 'New User', new_user_path %>
</p>
```
You can then start the application using `rails s`. Head to `localhost:3000/users/new` to see and use the form.
Let's focus on how the form differs from a classic Ruby on Rails form.
**A note on validations**: _Simple Form can work with mandatory fields out of the box. For example, if we were to add the following presence validation in our User model:_
```ruby
class User < ApplicationRecord
  validates :username, presence: true
# more code
end
```
_This will be used by Simple Form to add a little '\*' next to the `username` field, marking it as mandatory. It does not handle validation errors automatically, though: when a save fails, you still have to re-render the form from the controller action so that Simple Form can display each field's errors._
### About Column Types and Form Fields
As mentioned earlier, we haven't specified a type for each field. The [Simple Form README](https://github.com/heartcombo/simple_form#available-input-types-and-defaults-for-each-column-type) has a complete list of all available input types and the defaults for each column type. Here are the most used ones:
| Column Type | Generated HTML element | Comment |
| -------------------- | ---------------------- | -------------------------------------------- |
| `string` | `input[type=text]` | |
| `boolean` | `input[type=checkbox]` | |
| (passwords) `string` | `input[type=password]` | Any column with a name containing 'password' |
| `text` | `input[type=textarea]` | |
| `integer`, `float` | `input[type=number]` | |
| `datetime` | `datetime select` | |
| `time` | `time select` | |
| `country` | `select` | |
### Booleans
As we can see in the above table, boolean attributes will be represented, by default, as checkboxes. In many cases, that's what we'll want. But if not, there is a way to customize this in Simple Form with the `as` attribute, which allows you to specify if you will show radio buttons or a dropdown instead.
Here is how to specify radio buttons instead:
```erb
<%= f.input :remember_me, input_html: { value: '1' }, as: :radio_buttons %>
```
This will generate the following HTML:
```html
<div class="input radio_buttons optional user_remember_me">
<label class="radio_buttons optional">Remember me</label>
<input type="hidden" name="user[remember_me]" value="" autocomplete="off" />
<span class="radio">
<label for="user_remember_me_true">
<input
value="true"
class="radio_buttons optional"
type="radio"
name="user[remember_me]"
id="user_remember_me_true"
/>Yes
</label>
</span>
<span class="radio">
<label for="user_remember_me_false">
<input
value="false"
class="radio_buttons optional"
readonly="readonly"
type="radio"
name="user[remember_me]"
id="user_remember_me_false"
/>No
</label>
</span>
</div>
```
It's impressive that such a small piece of Ruby code gets you such a complete piece of HTML!
### HTML
From the previous example, we can see how Simple Form generates a lot of HTML for each field. That includes the HTML for the input field and a wrapper div around the label and input field. We can customize those by specifying a custom class and id using the `input_html` and `wrapper_html` attributes. Here is an example.
```erb
<%= simple_form_for @user do |f| %>
  <%= f.input :username, wrapper_html: { class: 'username' }, input_html: { id: 'username' } %>
<% end %>
```
That's also a way to set attributes for the related HTML, such as `maxlength` or `value`. Let's take the password field from our user form. We can limit the size and length of the field with the `maxlength` attribute.
```erb
<%= f.input :password, input_html: { maxlength: 20 } %>
```
## Custom Inputs and Additional Options
Simple Form is a Ruby library that prepares HTML nodes for you. It comes with a whole set of fields, but you can also add your own. To do so, you can create classes inheriting from Simple Form's classes. Let's consider defining a custom social network input field with a little '@' prefix in front of the actual field.
```ruby
# app/inputs/social_handle_input.rb
class SocialHandleInput < SimpleForm::Inputs::Base
def input(wrapper_options)
merged_input_options = merge_wrapper_options(input_html_options, wrapper_options)
"@ #{@builder.text_field(attribute_name, merged_input_options)}".html_safe
end
end
```
Now we can use it in the following way:
```erb
<%= f.input :network_handle, as: :social_handle %>
```
> Note that, if your User model doesn't have a `network_handle` attribute, you must add it through a migration or cheat with an `attr_accessor` in the model.
## i18n Support
As we've mentioned, building forms is often a pain point. But things get even more painful when it comes to websites and applications that support multiple languages. Simple Form, thankfully, follows Ruby on Rails standards. You can use a `simple_form` key in the local files to define translations for all labels, hints, placeholders, prompts, etc.
Here is a little example:
```yaml
en:
simple_form:
labels:
user:
username: 'User name'
password: 'Password'
hints:
user:
username: 'User name to sign in.'
password: 'No special characters, please.'
placeholders:
user:
username: 'Your username'
password: '****'
include_blanks:
user:
age: 'Rather not say'
prompts:
user:
role: 'Select your role'
```
## Value Objects
Sometimes, we might use custom, non-ActiveRecord classes to instantiate the main object a form relies upon. This can be done to compose data from multiple models into a synthetic one for either business logic reasons or to gather several attributes into a single object.
Whatever the reason, you can still rely on Simple Form to build forms for that kind of object. To do so, the object class needs to implement, in the best case, three methods: `to_model`, `to_key`, and `persisted?`.
The `to_model` method will point to the object itself:
```ruby
def to_model
self
end
```
The `to_key` method points to the identifying attribute for the object. ActiveModel conventions expect an array of key attributes, so this usually means wrapping the `id` attribute:
```ruby
def to_key
  [id]
end
```
Finally, the `persisted?` method is there to tell Simple Form if the object is directly persisted or not.
```ruby
def persisted?
false
end
```
If that method isn't present, the `f.submit` helper isn't usable.
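Putting the three pieces together, here is a minimal plain-Ruby sketch of such a value object. The `Account` class and its attributes are purely illustrative, mirroring the example used later in this section:

```ruby
# A plain-Ruby value object (names are illustrative) implementing the three
# methods Simple Form relies on, without pulling in ActiveModel.
class Account
  attr_reader :id, :company_name

  def initialize(id: nil, company_name: nil)
    @id = id
    @company_name = company_name
  end

  # Simple Form should treat the object itself as the model.
  def to_model
    self
  end

  # ActiveModel conventions expect an array of key attributes, or nil.
  def to_key
    id ? [id] : nil
  end

  # A plain value object is never persisted directly.
  def persisted?
    false
  end
end

account = Account.new(id: 1, company_name: "Acme Inc.")
p account.to_key      # prints [1]
p account.persisted?  # prints false
```

Because `persisted?` returns `false`, Simple Form will render this object's form as a "new record" form, and `f.submit` stays usable.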
There is a faster way, though — including the `ActiveModel::Model` module in the class:
```ruby
# app/models/account.rb
class Account
include ActiveModel::Model
attr_accessor :id, :company_name
end
```
And let's use it through the User model.
```ruby
# app/models/user.rb
class User < ApplicationRecord
attr_accessor :network_handle
def account
@account ||= Account.new(id: 1, company_name: "Acme Inc.")
end
def company_name
account.company_name
end
end
```
We can then add a field for the company name within the user form:
```erb
<%= f.input :company_name %>
```
Of course, this attribute will not get saved in a table. However, the same concept can be applied to compose a value object with attributes pulled from several models.
## Wrapping Up
In this post, we've seen that Simple Form is easy to install and integrate into your Ruby on Rails application. Simple Form not only takes care of the details of each field in your form, but also generates complex and complete HTML so your form can be styled and shaped with ease.
I encourage you to dive into [Simple Form's documentation](https://www.rubydoc.info/gems/simple_form/1.3.1) to see how powerful it can be.
Happy coding!
**P.S. If you'd like to read Ruby Magic posts as soon as they get off the press, [subscribe to our Ruby Magic newsletter and never miss a single post](https://blog.appsignal.com/ruby-magic)!** | riboulet |
1,868,933 | Matrimony Web Design Company | Finding the right partner is a significant step in life, and having a seamless online experience can... | 0 | 2024-05-29T11:54:42 | https://dev.to/pgsoftware/matrimony-web-design-company-54j0 | Finding the right partner is a significant step in life, and having a seamless online experience can make this journey smoother. For matrimonial services looking to create or upgrade their website, choosing the right web design company is crucial. PG Softwares specializes in designing top-notch matrimony websites tailored to your specific needs.
A professional [matrimony web design company](https://www.pgsoftwares.com/matrimony-website-design-coimbatore.html) can help you create a user-friendly platform that attracts and retains users. Key features include intuitive navigation, secure user profiles, effective search filters, and easy-to-use communication tools. These elements ensure that users have a positive experience, encouraging them to stay longer and engage more with the site.
We understand the unique requirements of matrimonial websites. Their team of experts focuses on creating responsive designs that look great on all devices, ensuring accessibility for users on the go. Additionally, they implement robust security measures to protect sensitive user data, which is paramount in the online matchmaking industry.
Investing in a well-designed matrimonial website can set your service apart from competitors, helping to build trust and credibility with your audience. With PG Softwares, you get a partner dedicated to delivering high-quality, customized solutions that meet your business goals and user expectations.
| pgsoftware | |
1,868,931 | Matrimony Website Development Company | Are you seeking a reliable Matrimony Website Development Company to establish your online matrimonial... | 0 | 2024-05-29T11:53:22 | https://dev.to/pgsoftware/matrimony-website-development-company-41fb | Are you seeking a reliable [Matrimony Website Development Company](https://www.pgsoftwares.com/matrimony-website-design-coimbatore.html) to establish your online matrimonial platform? Look no further! Our company specializes in crafting bespoke matrimonial websites tailored to your specific needs. With our expertise in web development, we ensure seamless functionality, user-friendly interfaces, and robust security measures to safeguard user data.
Our team of skilled developers is dedicated to bringing your vision to life, incorporating advanced features such as profile creation, search filters, messaging systems, and payment gateways. Whether you're targeting a niche market or catering to a broad audience, our customized solutions are designed to scale with your growing business.
By choosing our [Matrimony Website Development Company](https://www.pgsoftwares.com/matrimony-website-design-coimbatore.html), you're not just investing in a website; you're investing in a powerful tool to connect individuals seeking lifelong companionship. With our proven track record of success and commitment to excellence, trust us to elevate your matrimonial venture to new heights.
Contact us today to discuss your project requirements and let us transform your ideas into a thriving online platform that fosters meaningful connections.
For More Details:
Visit Now: https://www.pgsoftwares.com/matrimony-website-design-coimbatore.html
| pgsoftware | |
1,868,928 | Why Should Your Business Opt for a White Label NFT Marketplace? | Non-fungible tokens (NFTs) have arisen as a game-changing means to buy, sell, and trade digital... | 0 | 2024-05-29T11:50:13 | https://dev.to/anne69318/why-should-your-business-opt-for-a-white-label-nft-marketplace-2118 | Non-fungible tokens (NFTs) have arisen as a game-changing means to buy, sell, and trade digital assets in today's rapidly growing digital ecosystem. Businesses from many industries are increasingly attempting to capitalize on this trend. One of the most efficient methods to enter the NFT field is via a white-label NFT marketplace. This strategy not only simplifies the entry procedure but also provides several advantages that can have a substantial impact on your company's performance in the NFT industry.

**What is a White Label NFT Marketplace?**
A white-label NFT marketplace is a pre-built platform that businesses can rebrand and personalize to match their personalities. This type of marketplace is pre-loaded with critical features like secure transactions, user-friendly interfaces, and powerful backend support, allowing firms to deploy their NFT platforms quickly and efficiently.
**Benefits of Choosing a White Label NFT Marketplace**
- **Cost-Effectiveness**
Creating an NFT marketplace from scratch might be prohibitively expensive, necessitating a considerable investment in technology, research, and time. A white-label solution significantly decreases these costs, providing a more cost-effective alternative without sacrificing quality or functionality.
- **Faster Time to Market**
In the fast-paced world of NFTs, timing is everything. A white-label NFT marketplace enables firms to quickly start their platform, avoiding the lengthy development period. This rapid deployment allows you to capitalise on the NFT trend sooner and obtain a competitive advantage in the market.
- **Customization and Branding**
Despite being a pre-built system, white-label NFT markets provide substantial customization options. You can customize the platform's style, features, and functionalities to reflect your brand's identity and specific business requirements. This guarantees that your marketplace stands out and appeals to your target demographic.
- **Scalability**
White label NFT marketplaces are intended to expand alongside your business. They provide scalable systems that can support an expanding number of users, transactions, and digital assets. This adaptability ensures that your platform can respond to market demands and business growth without requiring major overhauls.
- **Security and Reliability**
Security is crucial in any digital transaction environment, particularly given the high value of NFTs. White label NFT marketplaces include built-in security features such as encryption, secure payment channels, and strong user authentication processes. This level of protection fosters user trust while also protecting your organisation from potential dangers.
- **Technical Support and Maintenance**
Operating an NFT marketplace necessitates continuous technological support and maintenance. White label solutions frequently offer complete support packages, ensuring that any technical issues are immediately resolved and the platform is kept up-to-date with the most recent features and security fixes.

**Conclusion**
Choosing a [white label NFT marketplace](https://blocktunix.com/metaverse-nft-marketplace-development/) provides a strategic advantage for businesses seeking to enter the NFT field. Its cost-effectiveness, easy implementation, customization choices, scalability, and strong security features make it an appealing alternative for enterprises of all sizes. By utilizing a white-label NFT marketplace, your company may efficiently and successfully enter the lucrative NFT sector, building a strong presence and driving growth in this dynamic digital frontier.
To summarise, a white label NFT marketplace offers a complete, safe, and adaptable solution that enables your company to thrive in the NFT ecosystem, allowing you to meet market demands and expand your brand's digital presence.
| anne69318 | |
1,868,927 | Adding external domains in nextjs project. | Well this is a rather simple post on how to setup external domains and use them in our nextjs... | 0 | 2024-05-29T11:49:47 | https://dev.to/shahbaazx786/adding-external-domains-in-nextjs-project-48p2 | nextjs, nextconfig, react, typescript | Well this is a rather simple post on how to setup external domains and use them in our nextjs projects.
if you have ever came across below error then this post is for you.
```CMD
The "images.domains" configuration is deprecated. Please use "images.remotePatterns" configuration instead.
```
## Old method(Kind of deprecated from Nextjs 14):
```JS
/** @type {import('next').NextConfig} */
const nextConfig = {
images: {
domains: ['images.unsplash.com', 'i.pravatar.cc']
}
};
export default nextConfig;
```
- Up to Next.js 13, we just need to add all the external hostnames to the `domains` array of `nextConfig`.
- This automatically allows access to the specified domains in our project.
## New Method (Applicable from v14):
```JS
/** @type {import('next').NextConfig} */
const nextConfig = {
images: {
remotePatterns: [
{
protocol: 'https',
hostname: 'images.unsplash.com',
pathname: '**'
},
{
protocol: 'https',
hostname: 'i.pravatar.cc',
pathname: '**'
},
]
}
};
export default nextConfig;
```
- The only difference here is that we rename `domains` to the `remotePatterns` array, which accepts objects with parameters like `protocol`, `hostname`, `port`, and `pathname`.
- If you are just trying to make it work in a personal project, you can copy the new method above as-is.
- Otherwise, you can specify the exact hostname, protocol, port number, and acceptable pathnames, e.g. `images.unsplash.com/**/user12`.
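For example, a stricter entry can pin down every part of the allowed URL. The hostname, port, and path below are purely illustrative:

```javascript
// next.config.mjs — a stricter remotePatterns entry (host, port, and path are illustrative)
/** @type {import('next').NextConfig} */
const nextConfig = {
  images: {
    remotePatterns: [
      {
        protocol: 'https',
        hostname: 'cdn.example.com', // only this exact host
        port: '8443',                // only this port
        pathname: '/avatars/**'      // only URLs under /avatars/
      }
    ]
  }
};

export default nextConfig;
```

With an entry like this, only images served from `https://cdn.example.com:8443/avatars/...` would be optimized by `next/image`; any other host, port, or path is rejected.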
| shahbaazx786 |
1,868,926 | Google’s AI Research in Healthcare: Med-PaLM and Beyond | Introduction Artificial Intelligence (AI) has been making significant strides in various sectors,... | 27,548 | 2024-05-29T11:48:48 | https://dev.to/aishikl/googles-ai-research-in-healthcare-med-palm-and-beyond-2ddg | <h2>Introduction</h2>
<p>Artificial Intelligence (AI) has been making significant strides in various sectors, and healthcare is no exception. Google's AI research, particularly in the medical domain, has been groundbreaking. This blog delves into Google's advancements in healthcare AI, focusing on their medical large language model (LLM) research, including Med-PaLM and its successor Med-PaLM 2.</p>
<h2>Med-PaLM: A Revolutionary AI Model</h2>
<h3>The Genesis of Med-PaLM</h3>
<p>Med-PaLM is a large language model (LLM) designed to provide high-quality answers to medical questions. It harnesses the power of Google's large language models, which have been aligned to the medical domain and evaluated using medical exams, medical research, and consumer queries. <a href="https://sites.research.google/med-palm/">Learn more</a>.</p>
<h3>Med-PaLM 2: The Next Iteration</h3>
<p>Med-PaLM 2, introduced at Google Health’s annual event, The Check Up, in March 2023, was the first to reach human expert level on answering USMLE-style questions. According to physicians, the model's long-form answers to consumer medical questions improved substantially. <a href="https://cloud.google.com/blog/topics/healthcare-life-sciences/sharing-google-med-palm-2-medical-large-language-model">Read more</a>.</p>
<h2>AI in Ultrasound Image Interpretation</h2>
<h3>Bridging the Gap in Maternal Care</h3>
<p>In recent years, sensor technology has evolved to make ultrasound devices more affordable and portable. However, they often require experts with years of experience to conduct exams and interpret the images. To help bridge this divide, Google is building AI models that can help simplify acquiring and interpreting ultrasound images to identify important information like gestational age in expecting mothers and early detection of breast cancer. <a href="https://health.google/health-research/">Learn more</a>.</p>
<h3>Partnerships for Real-World Applications</h3>
<p>Google is partnering with Jacaranda Health, a Kenya-based nonprofit focused on improving health outcomes for mothers and babies in government hospitals, to research digital solutions that can help them reach their goal. In Sub-Saharan Africa, maternal mortality remains high, and there is a shortage of workers trained to operate traditional high-cost ultrasound machines. <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10301994/">Read more</a>.</p>
<h2>Enhancing Radiotherapy Planning with AI</h2>
<h3>Collaboration with Mayo Clinic</h3>
<p>Over the past three years, Google has partnered with Mayo Clinic to explore how AI can support the tedious, time-consuming process of planning for radiotherapy, a common cancer treatment used to treat more than half of cancers in the U.S. The most labor-intensive step in the planning process is a technique called “contouring”, where clinicians draw lines on CT scans to separate areas of cancer from nearby healthy tissues that can be damaged by radiation during treatment. <a href="https://arxiv.org/html/2401.05654v1">Learn more</a>.</p>
<h3>Future Research and Development</h3>
<p>Google will soon publish research about the findings of their study and the radiotherapy model they developed. As of today, they are formalizing their agreement with Mayo Clinic to explore further research, model development, and commercialization. <a href="https://ai.google/discover/healthai/">Read more</a>.</p>
<h2>AI for Tuberculosis Screening</h2>
<h3>Addressing Global Health Challenges</h3>
<p>Building on years of health AI research, Google is working with partners on the ground to bring the results of their research on tuberculosis (TB) AI-powered chest x-ray screening into the care setting. According to the WHO, TB is the ninth leading cause of death worldwide, with over 25% of TB deaths occurring in Africa. <a href="https://www.linkedin.com/posts/google-health_announcing-our-newest-health-ai-launch-activity-7140719681690099713-fW-E">Learn more</a>.</p>
<h3>Real-World Implementation</h3>
<p>While TB is treatable, it requires cost-effective screening solutions to help catch the disease early and reduce community spread. Google's AI models aim to provide these solutions, making a significant impact on global health. <a href="https://medium.com/meta-multiomics/pushing-the-limits-googles-med-palm-2-is-transforming-ai-in-healthcare-3ab1d0063866">Read more</a>.</p>
<h2>Conclusion</h2>
<p>Google's advancements in AI for healthcare, particularly through their Med-PaLM models, are paving the way for more accurate, efficient, and accessible medical care. From improving maternal care and cancer treatments to addressing global health challenges like tuberculosis, Google's AI research is making a significant impact on the healthcare industry. <a href="https://en.wikipedia.org/wiki/Artificial_intelligence_in_healthcare">Learn more</a>.</p>
| aishikl |
1,868,925 | Coding vs Programming for Kids: How to Differentiate Between Them in 2024? | As technology changes all the time, words like "coding" and "programming" are often used equally.... | 0 | 2024-05-29T11:48:15 | https://dev.to/owais121/coding-vs-programming-for-kids-jdk | coding, programming, kids, education |
As technology changes all the time, words like "coding" and "programming" are often used equally. This can make parents and teachers confused, especially when they are trying to [teach these ideas to kids](https://khiredkids.com/). The goal of this article is to clear up the difference between coding and programming, especially when it comes to how well they work for young students.
## What is Coding?
Coding refers to the process of writing instructions for a computer to execute. It involves making programs, apps, websites, and other things using specific programming languages. In short, coding is the practical application of what you have learned about programming languages.
### How Coding Works?
Coding is the process of making a computer program and the underlying hardware communicate. Tools such as compilers translate the program into assembly language, and the assembly language is then converted into binary-coded signals.
Computer systems are electronic devices that use binary-coded signals to communicate and do their jobs. Binary code uses just two symbols, 0 and 1, and these signals are produced by switches and transistors.
During the coding process, high-level and assembly-level languages are turned into binary codes. This lets the computer hardware and software applications talk to each other.
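As a small illustration of this idea (a toy Python sketch, not tied to any particular toolchain), you can peek at the binary codes behind ordinary text:

```python
# Illustration: every character a program handles ultimately becomes
# a pattern of 0's and 1's that hardware switches can represent.
def to_binary(text):
    """Return the 8-bit binary code for each character in the text."""
    return [format(ord(ch), "08b") for ch in text]

print(to_binary("Hi"))  # ['01001000', '01101001']
```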
## What is Programming?
On the other hand, [programming includes](https://dev.to/hamiecod/programming-languages-are-overrated-1aba) a wider range of tasks. It's more than just writing code; it also involves designing algorithms, figuring out how to solve problems, and planning the structure of a software solution. To be good at programming, you need a much deeper understanding of computational logic and problem-solving.
### How Programming Works?
As we discussed earlier, [programming](https://dev.to/bilalmohib/programming-advice-4hmh) is the more advanced step after coding, and it works in the following steps:
- Problem Statements
- Software Documentation
- Designing Algorithms and Flowcharts
- Software Development
- Software Testing
- Software Maintenance
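To make these stages concrete, here is a hypothetical, child-sized example in Python (the problem and function name are invented for illustration) that walks through the same steps:

```python
# Problem statement: given a list of test scores, find the average.
# Algorithm: add up all the scores, then divide by how many there are.

def average_score(scores):
    """Software development: turn the algorithm into working code."""
    return sum(scores) / len(scores)

# Software testing: check the program against an answer we already know.
assert average_score([80, 90, 100]) == 90
print("All tests passed!")
```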
## Best Programming Languages for Kids
There are a lot of different computer languages that kids can learn. Here are some of the most popular:
- Scratch is a great language for kids because it is block-based.
- [Python, like Scratch](https://khiredkids.com/scratch-vs-python/), can be used for a lot of different things, from building websites to analyzing data.
- JavaScript is the language of the web and is used to make interactive websites and apps.
- Swift is Apple's programming language for making iOS and macOS apps.

The best language for your child will depend on their age, what they like, and what their goals are. It's important to pick a language that challenges them without being too hard.
## Why Learn Coding and Programming?
Technology has changed the world in many ways lately, and it has evolved faster than most people realize. Much of this change is driven by software growth — and behind all that software are coding and programming.
Coding and programming keep making things better. Being able to code and program makes a big difference in people's careers, and these skills are useful not only for building software but also in everyday life, showing great results in every area.
## Coding Vs Programming: Differentiating Factors
### Scope of Activity
When you code, you usually focus on implementing specific tasks or functions within a bigger program. Programming includes more than just writing code; it also covers planning, designing, developing, and testing software solutions.
### Level of Complexity
Coding tasks can be easy or hard, but programming usually includes more complicated thinking about algorithms and how to solve problems. To be good at programming, you need to know more about things like data structures, algorithms, and computer logic.
### End Goal
Coding is all about turning written directions into a language that computers can understand and follow. The goal of programming is to make complete software solutions that meet specific wants or problems.
### Platforms and Tools
When you write and test code, you usually do it in an [integrated development environment](https://en.wikipedia.org/wiki/Integrated_development_environment) (IDE) or a text editor. To handle bigger projects well, programmers may need extra tools like debugging software, version control systems, and collaboration platforms and also get help and knowledge from the best platforms such as [KhiredKids](https://khiredkids.com/) that develop your problem-solving skills.
## Benefits of Learning Coding and Programming
There are many good things about learning how to code and program. Here are a few:

- Learning to code and program strengthens the skills that help you come up with new and creative ideas.
- Once you know how to code and program, it's easier to get a job in IT.
- Coding and [programming also help](https://dev.to/mrdanishsaleem/programming-is-not-just-code-its-so-much-more-4aj4) you build logical thinking, which is very helpful for landing a good job.
- Learning to code and program lets you build new, useful software that makes people's lives easier in every way.
### Final Thoughts
In conclusion, knowing the different meanings behind coding and programming is essential to clarify your understanding and [become an expert in coding](https://khiredkids.com/explore-all-courses/). Parents and teachers can make smart choices about their kids' tech schooling if they understand the differences between these terms.
By creating a space that encourages both coding and programming, we give young people the tools they need to be creative and confident in the digital world.
There are also some things to keep in mind that differentiate coding from programming such as scope of activity, level of complexity, end goals, and more. | owais121 |
1,868,923 | Navigating the Digital Marketing Landscape in Kochi, Trivandrum: A Comprehensive Guide | Kochi, often referred to as the commercial capital of Kerala, has seen a significant surge in digital... | 0 | 2024-05-29T11:43:27 | https://dev.to/harish_kumar_/navigating-the-digital-marketing-landscape-inkochitrivandrum-a-comprehensive-guide-3o7p | webdev, programming, react, python | Kochi, often referred to as the commercial capital of Kerala, has seen a significant surge in digital marketing activities. As businesses in Kochi increasingly recognize the power of online presence, the demand for effective digital marketing strategies has never been higher. Whether you are a local business owner, an aspiring digital marketer, or an established company looking to enhance your online footprint, understanding the landscape of digital marketing in Kochi is crucial. We are one of the best [digital marketing agencies in Kochi, Kerala since 2016](https://www.cgitts.com/).
Kochi is rapidly becoming a hotspot for digital marketing innovation. Whether you're a business looking to enhance your online presence or an individual aiming to learn the ropes of digital marketing, Kochi offers a wealth of opportunities. By partnering with the right agency or enrolling in a top training course, you can harness the power of digital marketing to achieve your goals. Stay ahead of the competition and make your mark in the digital world with the best digital marketing services in Kochi.

CGITTS, incorporated since 2016 at Trivandrum, Kerala, is a technology-based training and consultancy service provider registered under MSME, Govt. of India (UAN KL12D0005156), initiated and approved by CHRD (registered under the State Govt. of Kerala since 2007). As part of our expansion plans, and to offer unconditional, unparalleled, and outstanding digital marketing services to esteemed clients in Dubai, United Arab Emirates, CGITTS has extended its wings to Dubai under the title M/S Wahat Al Muna Digital Marketing, an initiative of M/S Filo Computers, Al Ain (since 2010), with the sole intent of being the best digital marketing company in Dubai, U.A.E.

CGITTS is governed and guided by a panel of core dedicated professionals with rich expertise in the training and consultancy sector. We are a team of dedicated and passionate SEO Experts, Social Media Ad Experts, Google & Hubspot Certified Experts, Designers, Programmers and Developers. Though our core team focuses on Digital Marketing Projects, we design and develop websites, apps, and customized software based on client requirements, and finally this is what makes us known as one among the [best digital marketing companies in Trivandrum](https://www.cgitts.com/), Kerala and pan India since 2016.
| harish_kumar_ |
1,392,690 | test | def foo end Enter fullscreen mode Exit fullscreen mode | 22,523 | 2024-05-29T11:39:21 | https://dev.to/chloerei/test-56ab | ```ruby
def foo
end
```
| chloerei | |
1,867,659 | LLM Fine-tuning on RTX 4090: 90% Performance at 55% Power | At just a fraction of power, 4090 is capable of delivering almost full performance. While running... | 0 | 2024-05-29T11:39:15 | https://dev.to/maximsaplin/llm-fine-tunig-on-rtx-4090-90-performance-at-55-power-29o3 | ai, machinelearning, hardware, performance | At just a fraction of power, the 4090 is capable of delivering almost full performance.
While [running SFT](https://github.com/maxim-saplin/finetuning/blob/4cc21f209e1448ac37ca1ff6709ad7045f3b1288/qlora.py) (supervised fine-tuning) via Hugging Face's TRL library (using Torch as a backend), I decided to move the Afterburner power slider down:

And checked **wandb** dashboards for changes in training speed (epochs per hour) and GPU power:


Here's the full table with performance (and a few other measurements) at different power levels:
| Power, W | Temp, °C | Afterburner PWR % | Perf (Epoch/Hour) | Perf/kW | Power % | Perf % |
| -------- | -------- | ----------------- | ----------------- | ------- | ------- | ------ |
| 390 | 72 | 100% | 0.442 | 1.134 | 100.0% | 100.0% |
| 330 | 70 | 80% | 0.436 | 1.322 | 84.6% | 98.6% |
| 300 | 67 | 70% | 0.413 | 1.378 | 76.9% | 93.4% |
| 260 | 62 | 60% | 0.405 | 1.557 | 66.7% | 91.5% |
| 240 | 60 | 55% | 0.394 | 1.644 | 61.5% | 89.2% |
| 220 | 58 | 50% | 0.365 | 1.660 | 56.4% | 82.6% |
| 180 | 52 | 40% | 0.271 | 1.508 | 46.2% | 61.3% |
| 150 | 47 | 33% | 0.221 | 1.473 | 38.5% | 49.9% |
> If you run long training sessions on your RTX 4090 PC and would like to save on electricity bills OR keep your room cooler (500W midi tower is quite a heater), limiting GPU power to 50-60% makes total sense.
Besides, there's a sweet spot at 50% (220W) with maximum efficiency (performance-per-watt, or trained-epochs-per-watt-hour). At this power level, you still get 82% of the max speed.
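The efficiency column is simply epochs-per-hour divided by kilowatts, so the sweet spot can be reproduced with a few lines of Python from the table's own numbers (small rounding differences from the table are expected):

```python
# (power in watts, training speed in epochs/hour), taken from the table above
measurements = [
    (390, 0.442), (330, 0.436), (300, 0.413), (260, 0.405),
    (240, 0.394), (220, 0.365), (180, 0.271), (150, 0.221),
]

# Efficiency: trained epochs per kilowatt-hour
efficiency = {watts: perf / (watts / 1000) for watts, perf in measurements}
best = max(efficiency, key=efficiency.get)
print(f"Most efficient point: {best} W")  # 220 W
```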
## Few Notes on RTX 4090 Power Levels
Most desktop RTX 4090 cards are rated at 450W, such as mine (a Palit 4090 flashed with an Asus 450W 1.1v BIOS). There are versions with 500W, 600W, and even [666W](https://www.galax.com/en/graphics-card/hof/geforce-rtx-4090-hof.html) power limits.
I could see 450W power consumption in the OCCT synthetic benchmark. In 3DMark Time Spy, the max power observed was around 430W.
While running the above training (full fine-tuning) the max reported power was 390W - TRL/Torch was not able to fully utilize the GPU (actual utilization being at around 90%). This can be explained by not filling the entirety of VRAM (~20GB out of 24GB). And it could be fixed by increasing the batch size training param (and risking VRAM overflow into shared memory significantly slowing down the total training time). On some other occasions, I could see 410-420W from TRL running LORA fine-tuning.
Based on actual GPU power reported it seems that Afterburner power limits were calculated assuming 100% is 440W.
## Gaming Performance Follows Suit
The diminishing performance returns of the 4090 have been evaluated before. E.g., in this [reddit post](https://www.reddit.com/r/nvidia/comments/ybl2tb/rtx_4090_performance_per_watt_graph/) a user shared 3DMark Fire Strike scores from an RTX 4090. The outcome is the same: you get 80% performance at a 50% power limit.
 | maximsaplin |
1,429,986 | test | A post by Rei | 22,523 | 2024-05-29T11:38:59 | https://dev.to/chloerei/test-3oc7 | chloerei | ||
1,868,921 | Wound Care Market: Top Factors That Are Leading the Demand Around the Globe | Fairfield Market Research, a leading provider of industry insights, has released a comprehensive... | 0 | 2024-05-29T11:37:37 | https://dev.to/n_patil_96f2372543795ac55/wound-care-market-top-factors-that-are-leading-the-demand-around-the-global-d05 | Fairfield Market Research, a leading provider of industry insights, has released a comprehensive report on the global wound care market, highlighting key trends, growth determinants, major findings, and future prospects. The report forecasts significant expansion in the market, with a projected valuation of US$28.8 billion by 2030, marking a notable jump from US$21.2 billion in 2022.
Visit our Research Report: https://www.fairfieldmarketresearch.com/report/wound-care-market

Major Findings:
Prevalence of Chronic Wounds Driving Market Growth: The report identifies the increasing prevalence of chronic wounds, such as diabetic foot ulcers, pressure ulcers, and venous leg ulcers, as a key driver for the wound care market. Factors like ageing populations, rising rates of diabetes, and lifestyle-related conditions contribute to the higher incidence of chronic wounds.
Rise in Surgical Procedures Fueling Demand: Another significant trend driving market growth is the increasing number of surgical procedures, including elective and trauma-related surgeries. This necessitates effective wound management and accelerates the demand for wound care products and solutions.
Dominance of Advanced Wound Care Products: In 2022, the advanced wound care products category dominated the industry. These products, including dressings, therapies, and devices, are designed to promote faster wound healing, reduce infection rates, and provide optimal wound management.
Surgical and Traumatic Wounds Segment Leading Market Share: The surgical and traumatic wounds segment is anticipated to dominate the market share globally. This segment has historically been a significant focus due to the prevalence of surgical procedures and traumatic injuries, necessitating effective wound care products and solutions.
Hospitals and Clinics Control Market Share: Hospitals and clinics category controlled the market in 2022, handling a large volume of patients with various wound types and requiring a wide array of wound care products and solutions for effective wound management.
Significant Presence of Traditional Wound Care Products: The traditional wound care category remains highly prevalent in the market, offering affordable and cost-effective options, particularly in regions with constrained healthcare budgets or limited access to advanced medical facilities.
North America Leading Regional Market: North America is anticipated to account for the largest share of the global wound care market, driven by technological advancements and the prevalence of traumatic wounds.
Rapid Growth Expected in Asia Pacific: The Asia Pacific region is witnessing significant growth due to rapid economic development, technological advancements, and an ageing population leading to an increased incidence of chronic conditions.
Expert Analysis:
Fairfield Market Research's analysis underscores the increasing significance of advanced wound care products across various industries. The report highlights the pivotal role of hospitals and clinics in propelling market expansion and the growing demand for effective wound care solutions due to the burgeoning incidence of chronic wounds.
Key Growth Determinants:
The report identifies the increasing significance of advanced wound care across non-healthcare areas and the surge in demand at hospitals and clinics as primary growth determinants. Additionally, the rising incidence of chronic wounds presents significant opportunities for market expansion.
Major Growth Barriers:
High costs of advanced wound care products and concerns about adverse effects and allergies emerge as significant barriers to market growth, particularly in regions with constrained healthcare budgets.
Key Trends and Opportunities:
Growing demand for advanced wound care products, a focus on patient-centric wound care, and the rise in chronic wounds and lifestyle diseases represent key trends and opportunities shaping the future of the wound care market.
Regulatory Scenario:
The regulatory environment significantly shapes the wound care industry, impacting product development, market access, quality standards, and overall competitiveness. Compliance with stringent regulatory guidelines is crucial for product approval and market acceptance.
Regional Frontrunners:
North America continues to secure the leading position in the global wound care market, while Asia Pacific presents significant growth prospects through 2030.
Competitive Landscape Analysis:
The global wound care market is consolidated, with major players such as Smith & Nephew, 3M, Johnson & Johnson, and Coloplast leading the industry. The report highlights new product launches and distribution agreements as significant company developments driving market growth.
Conclusion:
Fairfield Market Research's report provides comprehensive insights into the global wound care market, highlighting key trends, growth determinants, and future prospects. With a projected valuation of US$28.8 billion by 2030, the market is poised for steady growth, driven by factors such as the prevalence of chronic wounds, increasing surgical procedures, and technological advancements in wound care products and solutions. | n_patil_96f2372543795ac55 | |
1,868,920 | Google’s AI Research in Healthcare: Med-PaLM and Beyond | Introduction Artificial Intelligence (AI) has been making significant strides in various... | 27,548 | 2024-05-29T11:36:47 | https://dev.to/aishikl/googles-ai-research-in-healthcare-med-palm-and-beyond-3hl8 | ## Introduction
Artificial Intelligence (AI) has been making significant strides in various sectors, and healthcare is no exception. Google's AI research, particularly in the medical domain, has been groundbreaking. This blog delves into Google's advancements in healthcare AI, focusing on their medical large language model (LLM) research, including Med-PaLM and its successor Med-PaLM 2.
## Med-PaLM: A Revolutionary AI Model
### The Genesis of Med-PaLM
Med-PaLM is a large language model (LLM) designed to provide high-quality answers to medical questions. It harnesses the power of Google's large language models, which have been aligned to the medical domain and evaluated using medical exams, medical research, and consumer queries. Learn more.
### Med-PaLM 2: The Next Iteration
Med-PaLM 2, introduced at Google Health’s annual event, The Check Up, in March 2023, was the first to reach human expert level on answering USMLE-style questions. According to physicians, the model's long-form answers to consumer medical questions improved substantially. Read more.
## AI in Ultrasound Image Interpretation
### Bridging the Gap in Maternal Care
In recent years, sensor technology has evolved to make ultrasound devices more affordable and portable. However, they often require experts with years of experience to conduct exams and interpret the images. To help bridge this divide, Google is building AI models that can help simplify acquiring and interpreting ultrasound images to identify important information like gestational age in expecting mothers and early detection of breast cancer. Learn more.
### Partnerships for Real-World Applications
Google is partnering with Jacaranda Health, a Kenya-based nonprofit focused on improving health outcomes for mothers and babies in government hospitals, to research digital solutions that can help them reach their goal. In Sub-Saharan Africa, maternal mortality remains high, and there is a shortage of workers trained to operate traditional high-cost ultrasound machines. Read more.
## Enhancing Radiotherapy Planning with AI
### Collaboration with Mayo Clinic
Over the past three years, Google has partnered with Mayo Clinic to explore how AI can support the tedious, time-consuming process of planning for radiotherapy, a common cancer treatment used to treat more than half of cancers in the U.S. The most labor-intensive step in the planning process is a technique called “contouring”, where clinicians draw lines on CT scans to separate areas of cancer from nearby healthy tissues that can be damaged by radiation during treatment. Learn more.
### Future Research and Development
Google will soon publish research about the findings of their study and the radiotherapy model they developed. As of today, they are formalizing their agreement with Mayo Clinic to explore further research, model development, and commercialization. Read more.
## AI for Tuberculosis Screening
### Addressing Global Health Challenges
Building on years of health AI research, Google is working with partners on the ground to bring the results of their research on tuberculosis (TB) AI-powered chest x-ray screening into the care setting. According to the WHO, TB is the ninth leading cause of death worldwide, with over 25% of TB deaths occurring in Africa. Learn more.
### Real-World Implementation
While TB is treatable, it requires cost-effective screening solutions to help catch the disease early and reduce community spread. Google's AI models aim to provide these solutions, making a significant impact on global health. Read more.
## Conclusion
Google's advancements in AI for healthcare, particularly through their Med-PaLM models, are paving the way for more accurate, efficient, and accessible medical care. From improving maternal care and cancer treatments to addressing global health challenges like tuberculosis, Google's AI research is making a significant impact on the healthcare industry. Learn more.
| aishikl |
1,868,919 | Buy Brooksfield Narrow Stripe Blazer | At Peter Shearer Menswear, elevate your wardrobe with the Brooksfield Narrow Stripe Blazer, a... | 0 | 2024-05-29T11:35:43 | https://dev.to/oliver_james_763b3c231a62/buy-brooksfield-narrow-stripe-blazer-13oh | At Peter Shearer Menswear, elevate your wardrobe with the Brooksfield Narrow Stripe Blazer, a sophisticated addition to any collection. Crafted with precision, this blazer features subtle narrow stripes that exude elegance and versatility. Perfect for both formal and casual occasions, it offers a tailored fit and premium comfort. The Brooksfield Narrow Stripe Blazer effortlessly combines classic style with modern appeal, making it a must-have for discerning fashion enthusiasts.
DETAILS & CARE
73% Polyester 23% Rayon 4% Spandex
Rear side vents
Fully lined with accent trimming
3 internal jetted pockets
Dry clean only
Product Code BFU970
Made in China
Price: $249 (AUD)
For Shopping Click here: https://petershearer.com.au/collections/sportscoats/products/brooksfield-narrow-stripe-blazer-brf-bfu970?variant=41629313957947
 | oliver_james_763b3c231a62 | |
1,866,918 | AWS Open Source Newsletter, May Edition | Welcome to a new edition of the AWS Open Source newsletter! The AWS... | 27,536 | 2024-05-29T11:30:00 | https://dev.to/aws-espanol/boletin-aws-open-source-may-edition-1bng | opensource, aws | Welcome to a new edition of the AWS Open Source newsletter!
The AWS Summits are just around the corner. If you have never attended one of our events, we highly recommend it. You will find plenty of demos (especially in the Innovation corner), techie talks from the AWS Community (Community Lounge), and more than 100 sessions across different tracks.
If you haven't registered yet for the Madrid Summit on June 5th, here is the link so you don't miss out: https://go.aws/4aNDAtY
And that's not all! If you're left wanting more, on June 6th we will be running hands-on workshops at our offices. You will be able to choose between GenAI, Containers, Serverless, Analytics, and much more!
But let's get to this month's content. In this edition we explore a wide range of tools and technologies, from open-source projects like DeepRacerOnSpot to implementing web services in Rust with AWS integrations. We will also learn how to simplify deployments on Amazon EKS with GitHub Actions and AWS CodeBuild, and how to improve network resilience with Istio. We will see how to enable private access to the Kubernetes API on Amazon EKS, and dive into the world of generative AI with projects like CloudConsoleCartographer and agent-evaluation.
We hope you enjoy the roundup:
## Tools
**[DeepRacerOnSpot](https://github.com/aws-deepracer-community/deepracer-on-the-spot)**
DeepRacerOnSpot is an open-source project from our friends at JP Morgan. It is a set of CloudFormation templates that simplifies creating EC2 instances for DeepRacer training, with automated training start/stop and up to 10x savings. It makes it easy to use the tools from the deepracer-for-cloud repository, enabling faster and cheaper training in the AWS Console.

**[AWS Advanced Python Wrapper driver](https://github.com/aws/aws-advanced-python-wrapper)**
Although Aurora offers features for maximum performance and availability, such as failover, most current drivers do not fully support them. The purpose of the AWS Advanced Python Driver is to add a software layer on top of existing Python drivers to take advantage of all of Aurora's enhancements without altering your current workflow with your existing databases and Python drivers.
**[serverless-lambda-cron-cdk](https://aws-oss.beachgeek.co.uk/3uf)**
serverless-lambda-cron-cdk is a repository that provides a starter kit for setting up cron jobs using AWS Lambda. It includes the necessary AWS Cloud Development Kit (CDK) deployment code, a CI/CD pipeline, as well as the Lambda function source code. The kit is designed to be easily configurable and deployable, allowing for quick, iterative setup. It is ideal for developers looking to automate tasks on a schedule using AWS Lambda.
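As a rough sketch of what a scheduled Lambda in a kit like this might look like (the handler below is a generic illustration, not code from the repo — the function name and return shape are assumptions):

```python
import datetime

def handler(event, context):
    """Entry point invoked on the cron schedule defined in the CDK stack."""
    now = datetime.datetime.now(datetime.timezone.utc)
    # ...do the scheduled work here (cleanup, report generation, etc.)...
    return {"statusCode": 200, "ranAt": now.isoformat()}

# Local smoke test with an empty event, as the scheduler would send
print(handler({}, None)["statusCode"])  # 200
```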
**[CloudConsoleCartographer](https://aws-oss.beachgeek.co.uk/3uk)**
CloudConsoleCartographer is a project launched at Black Hat Asia on April 18, 2024. It is a framework for consolidating groups of cloud events (for example, CloudTrail logs) and mapping them to the original user actions for later analysis. This lets you pick the meaningful signals out of all the noise that alerts can generate more efficiently. If you want to learn more, I recommend Daniel Bohannon's post, ["Introducing Cloud Console Cartographer: An Open-Source Tool To Help Security Teams Easily Understand Log Events Generated by AWS Console Activity](https://aws-oss.beachgeek.co.uk/3ul)".

**[e1s](https://aws-oss.beachgeek.co.uk/3w1)**
e1s is a terminal tool created by Xing Yahao for browsing and managing AWS ECS resources, supporting both Fargate and EC2 ECS. Inspired by k9s, e1s uses the default aws-cli configuration. It does not store or send your access key anywhere. The access key is used only to connect securely to the AWS API through the AWS SDK. e1s is available on Linux, macOS, and Windows.

**[amazon-bedrock-client-for-mac](https://aws-oss.beachgeek.co.uk/3um)**
amazon-bedrock-client-for-mac is a repository containing the code for the Amazon Bedrock Client for Mac, a demo macOS application built with SwiftUI. This application acts as a client interface for AWS Bedrock, allowing users to interact with AWS Bedrock models.

**[agent-evaluation](https://aws-oss.beachgeek.co.uk/3vw)**
agent-evaluation is a generative-AI-powered framework for testing virtual agents. Agent Evaluation implements an LLM agent (evaluator) that orchestrates conversations with your own agent (target) and evaluates the responses during the conversation. The repository has links to detailed docs that provide example configurations and a reference guide to get you started. It is a hands-on read you won't want to miss!
**[container-resiliency](https://aws-oss.beachgeek.co.uk/3vs)**
container-resiliency aims to provide comprehensive guidance and patterns for organizations to design, deploy, and operate resilient, fault-tolerant containerized applications on AWS. These patterns aim to provide the guidance needed to mitigate risks, minimize downtime, and ensure the continuous availability and resilience of containerized applications on AWS, ultimately improving their operational efficiency and customer experience.

## Demos, Solutions, and Workshops
**[bedrock-access-gateway](https://aws-oss.beachgeek.co.uk/3us)**
bedrock-access-gateway provides OpenAI-compatible RESTful APIs for Amazon Bedrock. Amazon Bedrock offers a wide range of foundation models (such as Claude 3 Opus/Sonnet/Haiku, Llama 2/3, Mistral/Mixtral, etc.) and a broad set of capabilities for building generative AI applications. Sometimes you may have applications developed using OpenAI APIs or SDKs and want to experiment with Amazon Bedrock without modifying your code. Or you may simply want to evaluate the capabilities of these foundation models in tools like AutoGen, etc. Well, this repository lets you access Amazon Bedrock models transparently through OpenAI APIs and SDKs, allowing you to test these models without any code changes.
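To illustrate what "OpenAI-compatible" means in practice (a minimal sketch — the base URL and model ID below are placeholders, not values from the project's docs), a client keeps the standard chat-completions request shape and only changes where it is sent:

```python
# Hypothetical example: the same request body an OpenAI SDK would send,
# just aimed at the gateway's base URL instead of api.openai.com.
GATEWAY_BASE_URL = "https://your-gateway.example.com/api/v1"  # placeholder

def chat_request(model, user_message):
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,  # a Bedrock model the gateway exposes
        "messages": [{"role": "user", "content": user_message}],
    }

payload = chat_request("anthropic.claude-3-sonnet", "Hello!")
print(payload["messages"][0]["role"])  # user
```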
**[svdxt-sagemaker-huggingface](https://aws-oss.beachgeek.co.uk/3uo)**
svdxt-sagemaker-huggingface is the latest repository from Gary Stafford, showcasing some very interesting work Gary has been writing about in the generative AI space. This time, Gary looks at the emerging field of video generation through Stability AI's Stable Video Diffusion XT (SVT-XT). This foundation model is a diffusion model that takes a still image as a conditioning frame and generates a video from it. The repository provides everything you need to get started and shows some of the videos Gary has created. Amazing stuff!
**[amazon-bedrock-audio-summarizer](https://aws-oss.beachgeek.co.uk/3w2)**
amazon-bedrock-audio-summarizer provides an automated way to transcribe and summarize audio files using AWS. It uses Amazon S3, AWS Lambda, Amazon Transcribe, and Amazon Bedrock (with Claude 3 Sonnet) to create text transcripts and summaries from uploaded audio recordings. This project was featured in Dr. Werner Vogels' blog post, [Hacking our way to better team meetings](https://aws-oss.beachgeek.co.uk/3w3).

**[generative-ai-newsletter-app](https://aws-oss.beachgeek.co.uk/3vx)**
generative-ai-newsletter-app is a ready-to-deploy serverless solution for automatically creating newsletters with AI-generated summaries. The application gives users the ability to influence the generative AI prompts to customize how content is summarized, the writing tone, the target audience, and more. Users can style the HTML newsletter, define how often newsletters are created, and share them with others.

### Cloud Native
**[Scale AI training and inference for drug discovery through Amazon EKS and Karpenter](https://aws-oss.beachgeek.co.uk/3tz)** offers a case study from Iambic Therapeutics showing how they use Karpenter on Amazon Elastic Kubernetes Service (Amazon EKS) to scale AI training and inference. [Hands-on]

**[Open source observability for AWS Inferentia nodes within Amazon EKS clusters](https://aws-oss.beachgeek.co.uk/3u0)** walks you through the open-source Observability pattern for AWS Inferentia. You will see how to monitor the performance of machine learning chips used in an Amazon Elastic Kubernetes Service (Amazon EKS) cluster, with data-plane nodes based on Amazon Elastic Compute Cloud (Amazon EC2) Inf1 and Inf2 instances. [Hands-on]

**[Using OPA to validate Amazon EKS Blueprint Templates](https://aws-oss.beachgeek.co.uk/3u2)** explores the benefits of using OPA to scan your Amazon EKS Terraform templates, and how it can help you maintain a secure environment. [Hands-on]

**[Autoscaling Kubernetes workloads with KEDA using Amazon Managed Service for Prometheus metrics](https://aws-oss.beachgeek.co.uk/3u3)** offers a practical guide showing how to autoscale an application on Amazon EKS using KEDA and Amazon Managed Service for Prometheus. [Hands-on]

**[Amazon OpenSearch Service Under the Hood: OpenSearch Optimized Instances (OR1)](https://aws-oss.beachgeek.co.uk/3ty)** takes a deep dive into the OpenSearch Optimized Instance (OR1) family and how it can deliver high indexing throughput and durability using a new physical replication protocol. It also explores some of the challenges of maintaining data correctness and integrity. [Hands-on]

**[Simplify Amazon EKS Deployments with GitHub Actions and AWS CodeBuild](https://aws-oss.beachgeek.co.uk/3vd)** offers a look at how to simplify Amazon EKS deployments with GitHub Actions and AWS CodeBuild. [Hands-on]

**[Enhancing Network Resilience with Istio on Amazon EKS](https://aws-oss.beachgeek.co.uk/3ve)** explores how Istio on EKS can improve network resilience for your microservices. It provides features such as timeouts, retries, circuit breakers, rate limits, and fault injection. Istio lets microservices maintain responsive communication even when facing disruptions. [Hands-on]

**[Enable Private Access to the Amazon EKS Kubernetes API with AWS PrivateLink](https://aws-oss.beachgeek.co.uk/3vf)** offers a blueprint for leveraging AWS PrivateLink to enable private, cross-account inter-VPC access to the Amazon EKS Kubernetes API. [Hands-on]

## AWS Community
### Cloud Native
**Sean Kane** of SuperOrbital brings us **[A deep dive into AWS and Kubernetes authentication](https://aws-oss.beachgeek.co.uk/3vm)**, a hands-on look at the various options AWS users have for enabling controlled access between AWS cloud resources and EKS Kubernetes cluster resources.
Staying on the topic of security, **Paul Schwarzenberger** brings us [AWS Application Load Balancer mTLS with CA](https://aws-oss.beachgeek.co.uk/3vq), where he shows how to implement mTLS for AWS Application Load Balancer using an open-source certificate authority.
#### Free courses
AWS has launched a series of free courses to help you start your DevOps journey. You will also find Machine Learning options available on the same portal.
1. [Getting Started with DevOps on AWS ](https://classcentral.com/course/getting-started-with-devops-on-aws-72991)
2. [DevOps Engineer Learning Plan](https://t.co/zOzISMDJFi)
### Generative AI
**Abishek Gupta** has written two must-read posts in this area. First, **[Vector databases for generative AI applications](https://aws-oss.beachgeek.co.uk/3u6)**, where he shares how to overcome limitations using vector databases and RAG. In the follow-up post, **[How to use Retrieval Augmented Generation (RAG) for Go applications](https://aws-oss.beachgeek.co.uk/3u7)**, Abi shows how to implement RAG with LangChain and PostgreSQL using Go.
**João Galego** explores two open-source projects (RAGmap and RAGxplorer) that help you explore the embeddings you create. You can read about it in his post **[Mapping embeddings: from meaning to vectors and back](https://aws-oss.beachgeek.co.uk/3u8)**.
Next up is WordPress enthusiast **Rio Astamal**, who shares how you can use generative AI to build a WordPress plugin that integrates with Amazon Bedrock, in his post **[I built an AI plugin for WordPress to make creators more productive. Here's how I did it.](https://aws-oss.beachgeek.co.uk/3u9)**
In this edition we also want to highlight **Matt Camp**'s DeepRacer repository, where he shares what you can expect from the latest update in [DeepRacer-for-Cloud v5.2.2 now available with new real-time training metrics](https://dev.to/aws-builders/deepracer-for-cloud-v522-now-available-with-new-real-time-training-metrics-7ki)!
Finally, if you are a Hugging Face user, AWS Inferentia2 is now available to you. You can deploy more than 100,000 models on AWS Inferentia2 using Amazon SageMaker, including popular architectures such as BERT and RoBERTa, and Hugging Face Inference Endpoints make deployment easy: simply choose your model and select the Inf2 instance option. To give you an idea, you can deploy Llama 3 models on:
- inf2-small: 2 cores, 32 GB of memory (0.75/hour)
- inf2-xlarge: 24 cores, 384 GB of memory (12/hour)
The endpoints support continuous batching and streaming, and are compatible with the OpenAI SDK Messages API, ensuring a smooth transition for your GenAI applications.
## Videos of the Month
### Keynote: Fifteen Years of Formal Methods at AWS
Recorded at Open Source Summit North America, Marc Brooker of AWS delivers a session that examines the past, present, and future of large-scale systems design, putting mathematics at the center.
{% embed https://youtu.be/HxP4wi4DhA0 %}
### Talking with Management About Open Source
A guide to talking with managers and stakeholders about open source. An interesting talk covering the essential knowledge for interacting with the people upstairs.
{% embed https://youtu.be/p_9WPXNXqPk %}
## The World of Rust
This month we want to highlight **Joshua Mo**, who brings us a series of articles that caught our attention:
In the first one, Josh shows us [how to use AWS Bedrock with Rust](https://www.shuttle.rs/blog/2024/05/10/prompting-aws-bedrock-rust), deploying an API that can receive a JSON prompt from an HTTP request and return a response from AWS Bedrock, either streamed or returned as a complete response.
He also brings us [how to build a web service that uses AWS S3 to store and retrieve images](https://www.shuttle.rs/blog/2024/04/17/using-aws-s3-rust). It is interesting to see how he adds telemetry through tracing, looks at testing, and covers the other factors involved in taking a Rust web application to production.
And speaking of telemetry, we could not pass up the chance to see how he uses [OpenTelemetry with Rust](https://www.shuttle.rs/blog/2024/04/10/using-opentelemetry-rust).
And that wraps up the May edition. See you in a few weeks! Until then, be good and happy coding! | iaasgeek |
1,868,918 | Garage Door Repair in Hoffman Estates, IL | Fix Your Garage Door FAST - JOE Garage Door Repair | https://www.youtube.com/watch?v=PLykjFMmER4 Is your garage door giving you grief? Is it stuck,... | 0 | 2024-05-29T11:35:42 | https://dev.to/christa_mcdermott_8fd948f/garage-door-repair-in-hoffman-estates-il-fix-your-garage-door-fast-joe-garage-door-repair-24mc | garage, door | https://www.youtube.com/watch?v=PLykjFMmER4
Is your garage door giving you grief? Is it stuck, noisy, or just plain broken? Don't waste time wrestling with it yourself! Here at JOE Garage Door Repair, we're the trusted experts in Hoffman Estates, IL for all your garage door needs. Our skilled technicians can fix any problem, big or small. From broken springs and openers to damaged panels and faulty sensors, we've got you covered. We use top-quality parts and guarantee our work, so you can be sure your garage door will be functioning safely and reliably for years to come. We offer fast, friendly service, and are available 24/7 for emergency repairs. Don't let a broken garage door disrupt your day. Call JOE Garage Door Repair today at (847) 809-2178 or visit us at https://www.joegaragedoor.com/garage-door-repair-in-hoffman-estates/ for a free quote. JOE Garage Door Repair - We fix your garage door problems, so you can get back to what matters most.
| christa_mcdermott_8fd948f |
1,868,917 | Understanding and Resolving QuickBooks Error 6470 | What is QuickBooks Error 6470? QuickBooks Error 6470 typically arises due to issues with the... | 0 | 2024-05-29T11:35:16 | https://dev.to/quick_fixstst_d05c02f05b5/understanding-and-resolving-quickbooks-error-6470-383 | **What is QuickBooks Error 6470?**
[QuickBooks Error 6470](https://quickfix1st.com/quickbooks-error-code-6470/) typically arises due to issues with the software's data files or conflicts within the company files. This error can prevent you from accessing certain features or performing essential tasks within QuickBooks.
**Common Causes of QuickBooks Error 6470**
- **Corrupt or Damaged Data Files:** If your QuickBooks data files are corrupt or damaged, it can lead to error 6470.
- **Incorrect Company File Path:** Errors in the company file path or misconfigurations can trigger this error.
- **Network Issues:** Connectivity problems or issues with network configuration can also cause this error.
- **Software Conflicts:** Conflicts with other applications or improper installation of QuickBooks updates can result in error 6470.

**Symptoms of QuickBooks Error 6470**
- Difficulty in opening or accessing company files.
- QuickBooks running slowly or freezing intermittently.
- Unexpected shutdowns of the QuickBooks application.
- Error message displaying "QuickBooks Error 6470" or "Qb Error 6470" on the screen.
**Steps to Resolve QuickBooks Error 6470**
1. **Restart QuickBooks and Your Computer:** Sometimes, a simple restart can resolve the issue. Close QuickBooks, restart your computer, and reopen QuickBooks.
2. **Update QuickBooks:** Ensure that your QuickBooks software is updated to the latest version. This can often resolve known issues and bugs.
   - Go to Help > Update QuickBooks Desktop.
   - Click on Update Now.
   - Select Get Updates.
3. **Run QuickBooks File Doctor:** QuickBooks File Doctor is a tool designed to diagnose and fix common issues with your company files.
   - Download and install QuickBooks Tool Hub.
   - Open the Tool Hub and select Company File Issues.
   - Click on Run QuickBooks File Doctor.
4. **Check and Repair Data Files:** Use the QuickBooks Verify and Rebuild Data tools to identify and repair any data integrity issues.
   - Go to File > Utilities > Verify Data.
   - If any issues are found, go to File > Utilities > Rebuild Data.
5. **Configure Firewall and Security Settings:** Ensure that your firewall or security software is not blocking QuickBooks.
   - Go to Control Panel > System and Security > Windows Firewall.
   - Select Allow an app or feature through Windows Firewall.
   - Ensure that QuickBooks is listed and allowed.
6. **Reinstall QuickBooks:** If the error persists, consider reinstalling QuickBooks.
   - Uninstall QuickBooks from your computer.
   - Download the latest version from the official QuickBooks website and reinstall it.

**Seeking Professional Help**
If you've tried all the above steps and are still encountering QuickBooks Error 6470, it may be time to seek professional assistance. Contact QuickBooks support or a certified QuickBooks ProAdvisor for expert help in resolving the issue.
**Conclusion**
Dealing with QuickBooks Error 6470, Qb Error 6470, or QuickBooks Error Code 6470 can be challenging, but understanding the causes and following systematic troubleshooting steps can help you get back on track. Keep your software updated, regularly back up your data, and seek professional assistance if needed to maintain smooth operations in your business. | quick_fixstst_d05c02f05b5 | |
1,868,916 | 🤷♀️Mastering HTML: The Ultimate Guide for Web Developers.🚀 | 1. HTML Tutorials: Your Starting Point HTML Basics What is HTML? HTML... | 0 | 2024-05-29T11:33:39 | https://dev.to/dharamgfx/mastering-html-the-ultimate-guide-for-web-developers-357m | webdev, beginners, html, programming | ## 1. HTML Tutorials: Your Starting Point
### HTML Basics
- **What is HTML?**
HTML (HyperText Markup Language) is the standard language for creating web pages. It structures content on the web.
- **Example:**
```html
<!DOCTYPE html>
<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1>This is a Heading</h1>
<p>This is a paragraph.</p>
</body>
</html>
```
### Introduction to HTML
- **The Foundation of Web Development**
HTML forms the skeleton of all web pages. It uses elements like headings, paragraphs, and links to organize and display content.
- **Example:** The `<p>` element defines a paragraph.
```html
<p>This is a simple paragraph.</p>
```
## 2. Multimedia and Embedding: Enhancing User Experience
### Adding Multimedia
- **Images, Videos, and Audio**
HTML allows the inclusion of multimedia such as images, videos, and audio to make web pages more engaging.
- **Example:** Embedding an image.
```html
<img src="image.jpg" alt="Description of image">
```
- **Example:** Embedding a video.
```html
<video width="320" height="240" controls>
<source src="movie.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
```
- **Example:** Embedding an audio file.
```html
<audio controls>
<source src="audio.mp3" type="audio/mpeg">
Your browser does not support the audio element.
</audio>
```
### Embedding Content
- **Seamless Integration**
HTML makes it easy to embed external content like maps, social media posts, and videos using the `<iframe>` tag.
- **Example:** Embedding a YouTube video.
```html
<iframe width="560" height="315" src="https://www.youtube.com/embed/example" frameborder="0" allowfullscreen></iframe>
```
## 3. HTML Tables: Organizing Data
### Creating Tables
- **Structuring Information**
Tables are used to organize data in rows and columns.
- **Example:** A simple HTML table.
```html
<table border="1">
<tr>
<th>Heading 1</th>
<th>Heading 2</th>
</tr>
<tr>
<td>Data 1</td>
<td>Data 2</td>
</tr>
</table>
```
## 4. References: Your HTML Toolkit
### HTML Elements
- **Building Blocks**
Elements are the basic units of HTML. They are represented by tags.
- **Example:** The `<div>` element is a block-level container.
```html
<div>This is a division element.</div>
```
### Global Attributes
- **Universal Features**
Attributes like `class`, `id`, and `style` can be used with any HTML element to modify its behavior.
- **Example:** Using the `class` attribute.
```html
<p class="intro">This is an introductory paragraph.</p>
```
- **Example:** Using the `id` attribute.
```html
<p id="uniqueParagraph">This paragraph has a unique ID.</p>
```
- **Example:** Using the `style` attribute.
```html
<p style="color:blue;">This paragraph is styled with inline CSS to have blue text.</p>
```
### Headings (h1 to h6)
- **Defining Headings**
HTML provides six levels of headings, `<h1>` being the highest (or most important) and `<h6>` the least.
- **Example:**
```html
<h1>This is an h1 heading</h1>
<h2>This is an h2 heading</h2>
<h3>This is an h3 heading</h3>
<h4>This is an h4 heading</h4>
<h5>This is an h5 heading</h5>
<h6>This is an h6 heading</h6>
```
### Span
- **Inline Container**
The `<span>` tag is used for grouping inline elements.
- **Example:**
```html
<p>This is a paragraph with a <span style="color:red;">red highlighted</span> span.</p>
```
### Paragraph
- **Text Blocks**
The `<p>` tag defines a paragraph.
- **Example:**
```html
<p>This is a paragraph of text.</p>
```
### Links
- **Creating Hyperlinks**
The `<a>` tag defines a hyperlink.
- **Example:**
```html
<a href="https://www.example.com">Visit Example</a>
```
### Lists
- **Ordered Lists**
The `<ol>` tag defines an ordered list.
- **Example:**
```html
<ol>
<li>First item</li>
<li>Second item</li>
<li>Third item</li>
</ol>
```
- **Unordered Lists**
The `<ul>` tag defines an unordered list.
- **Example:**
```html
<ul>
<li>First item</li>
<li>Second item</li>
<li>Third item</li>
</ul>
```
### Images
- **Adding Visuals**
The `<img>` tag embeds an image.
- **Example:**
```html
<img src="path/to/image.jpg" alt="Description of image">
```
### Forms
- **Gathering User Input**
The `<form>` tag is used to create an HTML form for user input.
- **Example:**
```html
<form action="/submit-form" method="post">
<label for="name">Name:</label>
<input type="text" id="name" name="name"><br><br>
<input type="submit" value="Submit">
</form>
```
### Buttons
- **Interactive Elements**
The `<button>` tag defines a clickable button.
- **Example:**
```html
<button type="button">Click Me!</button>
```
### Div
- **Block Container**
The `<div>` tag defines a division or a section in an HTML document.
- **Example:**
```html
<div>
<h2>This is a heading in a div</h2>
<p>This is a paragraph in a div.</p>
</div>
```
### Input Types
- **Gathering Different Data**
The `<input>` element supports various types like text, email, and password to gather different types of user input.
- **Example:** A text input field.
```html
<input type="text" name="username">
```
- **Example:** An email input field.
```html
<input type="email" name="user_email">
```
- **Example:** A password input field.
```html
<input type="password" name="user_password">
```
### Metadata
- **Document Information**
The `<meta>` tag provides metadata about the HTML document.
- **Example:** Setting the character set.
```html
<meta charset="UTF-8">
```
### Scripts
- **Adding JavaScript**
The `<script>` tag is used to embed or reference JavaScript code.
- **Example:**
```html
<script>
console.log("Hello, world!");
</script>
```
### Link Tags
- **Connecting Resources**
The `<link>` tag defines a relationship between the current document and an external resource.
- **Example:** Linking to a stylesheet.
```html
<link rel="stylesheet" href="styles.css">
```
### Styles
- **Inline Styling**
The `style` attribute allows you to apply CSS directly within an HTML element.
- **Example:**
```html
<p style="color:blue;">This paragraph is styled with inline CSS to have blue text.</p>
```
### Headings and Sections
- **Organizing Content**
- **Example:** Using headings from `<h1>` to `<h6>`.
```html
<h1>Main Heading</h1>
<h2>Sub Heading</h2>
<h3>Section Heading</h3>
<h4>Sub Section Heading</h4>
<h5>Minor Heading</h5>
<h6>Least Important Heading</h6>
```
## 5. HTML Guides: Best Practices
### Content Categories
- **Organizing Content**
HTML categorizes content into metadata, flow content, sectioning, heading, phrasing, embedded, interactive, and form-associated content.
- **Example:** Metadata content like `<meta>` tags provide information about the HTML document.
```html
<meta charset="UTF-8">
```
### Block-Level Elements
- **Defining Structure**
Block-level elements like `<div>`, `<h1>`, and `<p>` start on a new line and take up the full width available.
- **Example:** A `<div>` element containing a paragraph.
```html
<div>
<p>This is a paragraph inside a division.</p>
</div>
```
### Inline Elements
- **Flowing with Text**
Inline elements like `<span>`, `<a>`, and `<em>` do not start on a new line and only take up as much width as necessary.
- **Example:** An `<a>` element creating a hyperlink.
```html
<a href="https://www.example.com">Visit Example</a>
```
### Quirks Mode and Standards Mode
- **Rendering Modes**
Quirks Mode makes browsers behave like older versions for compatibility, while Standards Mode follows modern web standards.
- **Example:** Using a doctype declaration to trigger Standards Mode.
```html
<!DOCTYPE html>
```
### Date and Time Formats
- **Consistency in Display**
HTML provides standardized formats for dates and times using the `<time>` element.
- **Example:** Displaying a date in the `YYYY-MM-DD` format.
```html
<time datetime="2024-05-29">May 29, 2024</time>
```
### Constraint Validation
- **Form Data Integrity**
HTML5 supports constraint validation to ensure user input meets specific criteria.
- **Example:** Requiring an email format for input.
```html
<input type="email" required>
```
### Microdata
- **Enhanced Semantics**
Microdata is used to nest metadata within existing content to improve machine-readability.
- **Example:** Adding microdata to a product listing.
```html
<div itemscope itemtype="http://schema.org/Product">
<span itemprop="name">Product Name</span>
<span itemprop="price">$19.99</span>
</div>
```
### Microformats
- **Human-Readable Metadata**
Microformats are a way to use HTML classes and attributes to store metadata in web pages.
- **Example:** Using microformats for contact information.
```html
<div class="h-card">
<span class="p-name">John Doe</span>
<a class="u-email" href="mailto:johndoe@example.com">johndoe@example.com</a>
</div>
```
### Viewport Meta Tag
- **Responsive Design**
The viewport meta tag controls the layout on mobile browsers.
- **Example:** Setting the viewport for responsive design.
```html
<meta name="viewport" content="width=device-width, initial-scale=1.0">
```
### Allowing Cross-Origin Use of Images and Canvas
- **Security and Flexibility**
The `crossorigin` attribute enables images and canvas elements to be used across different domains securely.
- **Example:** Allowing cross-origin use for an image.
```html
<img src="https://example.com/image.jpg" crossorigin="anonymous">
```
By mastering these HTML concepts, you'll be well-equipped to create robust, user-friendly web pages that adhere to modern standards and best practices. Happy coding! | dharamgfx |
1,868,915 | 10 Best CSS Design Inspiration Sites for Creative Web Designers | In the ever-evolving world of web design, finding fresh and innovative inspiration is crucial for... | 0 | 2024-05-29T11:33:38 | https://dev.to/_topcssgallery/10-best-css-design-inspiration-sites-for-creative-web-designers-g0m | webdev, webdesign, designinspiration, webdesigner | In the ever-evolving world of web design, finding fresh and innovative inspiration is crucial for creating visually stunning and user-friendly websites. Whether you're a seasoned designer or just starting, having access to a collection of top CSS design inspiration sites can spark your creativity and help you stay on top of the latest trends. Here's a comprehensive list of the 10 best CSS design inspiration sites that every designer should bookmark.
## 1. TopCSSGallery
**Overview:**
TopCSSGallery is a go-to destination for web designers looking for top-notch CSS design inspiration. The site features a meticulously curated gallery of the best web designs from around the world, showcasing creativity and innovation in web development. Each design is handpicked, ensuring high quality and relevance to current design trends.
### Why It Stands Out:
**Curated Collection:** Every design on TopCSSGallery is handpicked for its aesthetic appeal and innovative use of CSS.
**User-Friendly Interface:** The site’s clean layout makes it easy to browse through the vast collection of designs.
**Regular Updates:** New designs are added regularly, ensuring that you always have access to the latest trends in web design.
## 2. CSS Design Awards
**Overview:**
CSS Design Awards is a prestigious platform that honors outstanding CSS web designs. It's a perfect place for designers to find high-quality inspiration and get recognized for their work.
### Why It Stands Out:
**Award-Winning Designs:** Features only award-winning designs, ensuring top-tier inspiration.
**Detailed Case Studies:** Provides insights into the design process of winning entries.
**Community Voting:** Allows users to vote for their favorite designs, adding an interactive element.
## 3. Awwwards
**Overview:**
Awwwards is one of the most renowned platforms for web design inspiration, recognizing the talent and effort of the best web designers, developers, and agencies in the world.
### Why It Stands Out:
**Global Recognition:** Recognizes and awards the best in web design globally.
**Educational Content:** Offers a wealth of articles, interviews, and resources for continuous learning.
**Diverse Categories:** Features a wide range of design categories, from minimalist to highly interactive sites.
## 4. CSS-Tricks
**Overview:**
CSS-Tricks is a comprehensive resource for web designers, offering a vast collection of tutorials, articles, and design inspirations focusing on CSS.
### Why It Stands Out:
**In-Depth Tutorials:** Provides detailed tutorials on CSS and web design techniques.
**Community-Driven:** Features articles and tips from a community of web design enthusiasts.
**Versatile Content:** Covers a wide range of topics beyond just design inspiration.
## 5. SiteInspire
**Overview:**
SiteInspire showcases the best web design projects from around the world, focusing on CSS and responsive design.
### Why It Stands Out:
**Diverse Range:** Features a wide variety of design styles and industries.
**Easy Navigation:** Offers a simple and intuitive browsing experience.
**Frequent Updates:** Regularly updates its gallery with new and inspiring designs.
## 6. The Best Designs
**Overview:**
The Best Designs is a well-curated gallery of the most impressive web designs, providing a constant source of inspiration for designers.
### Why It Stands Out:
**High-Quality Submissions:** Only the best and most innovative designs make it to the gallery.
**Designer Profiles:** Offers insights into the designers behind the featured works.
**Filter Options:** Allows users to filter designs by style, color, and industry.
## 7. Dribbble
**Overview:**
Dribbble is a community of designers sharing screenshots of their work, making it a fantastic source of inspiration for CSS design.
### Why It Stands Out:
**Community Engagement:** Encourages interaction and feedback among designers.
**Variety of Designs:** Showcases a vast range of design styles and disciplines.
**Job Listings:** Connects designers with potential job opportunities.
## 8. Behance
**Overview:**
Behance, by Adobe, is a leading platform to showcase and discover creative work, including exceptional CSS design projects.
### Why It Stands Out:
**Professional Network:** Connects designers with potential clients and collaborators.
**High-Quality Projects:** Features projects from top designers and agencies.
**Creative Showcases:** Offers curated galleries and collections for easy browsing.
## 9. Webdesign Inspiration
**Overview:**
Webdesign Inspiration is a resource for finding the latest trends and best examples of web design, focusing on aesthetics and functionality.
### Why It Stands Out:
**Trend Updates:** Keeps users informed about the latest design trends.
**Categorized Inspiration:** Allows users to browse designs by industry and style.
**User Contributions:** Accepts and features user-submitted designs.
## 10. One Page Love
**Overview:**
One Page Love is dedicated to showcasing the best one-page websites, providing inspiration for designers looking to create simple yet impactful web designs.
### Why It Stands Out:
**Focused Niche:** Specializes in one-page website designs.
**User-Friendly:** Easy to navigate with a clean and simple layout.
**Comprehensive Examples:** Features a wide range of styles and industries.
## Conclusion
Finding the right inspiration can be a game-changer for web designers. These 10 CSS [web design inspiration](https://www.topcssgallery.com) sites provide a wealth of high-quality, innovative, and diverse examples that can help spark your creativity and elevate your designs. Whether you're looking for award-winning designs, user-submitted projects, or niche-specific inspiration, these sites have got you covered. Bookmark them, explore them, and let your creativity flourish. | _topcssgallery |
1,868,914 | Transforming the Way We Interact with Files and Data with ChatGPT | Introduction ChatGPT's new feature allows attaching files directly from Google... | 0 | 2024-05-29T11:33:13 | https://dev.to/biosbug/transformando-a-maneira-como-interagimos-com-arquivos-e-dados-com-chatgpt-3pie | beginners, gpt3, devrel |
#### Introduction
ChatGPT's new feature lets you attach files directly from Google Drive and Microsoft OneDrive, making it easier to analyze and manipulate data.
#### Step by Step
1. **Open ChatGPT:**
   - Click the attachment icon in the bottom-left corner.
2. **Choose the Source:**
   - Select Google Drive or OneDrive.
3. **Select a File:**
   - Choose the file you want to attach.
4. **Apply a Prompt:**
   - Use a prompt to process the file as needed.
#### Prompt Examples
- **Data Analysis:**
  - Analyze the file to identify the main trends.
- **Data Visualization:**
  - Create bar charts to visualize the data.
- **Data Formatting:**
  - Remove blank columns and duplicates.
- **Data Export:**
  - Export the result as CSV.
#### Security Tips
1. **Data Protection:**
   - Do not enter sensitive information.
2. **Authentication and Control:**
   - Limit access to ChatGPT.
3. **Monitoring:**
   - Perform regular audits of interactions.
#### Conclusion
Use these new features to improve efficiency, while always ensuring data security. | biosbug |
1,868,911 | Mastering Financial Reporting: A Step-by-Step Guide to Implementing Tableau Dashboards | In the modern financial landscape, the ability to swiftly and accurately interpret data is crucial... | 0 | 2024-05-29T11:31:59 | https://dev.to/shreya123/mastering-financial-reporting-a-step-by-step-guide-to-implementing-tableau-dashboards-400g | tableau, tableaufinance, tableaudashboards | In the modern financial landscape, the ability to swiftly and accurately interpret data is crucial for making informed business decisions. Tableau, a leading data visualization tool, provides an intuitive platform to create dynamic dashboards that can transform raw financial data into insightful, interactive visualizations. Implementing Tableau dashboards for financial reporting involves several key steps to ensure that you leverage the full potential of this powerful tool. Here’s a guide to help you get started.
## 1. Define Your Financial Reporting Objectives
Before diving into Tableau, it's essential to clearly define your financial reporting objectives. Ask yourself the following questions:
- What key financial metrics and KPIs do you need to monitor?
- Who are the end-users of the dashboard (e.g., executives, finance team, stakeholders)?
- What level of detail do they need?

Common financial metrics include revenue, expenses, profit margins, cash flow, and ROI. Defining your objectives will guide the design and functionality of your dashboard.

## 2. Prepare Your Data
Data preparation is a critical step in creating effective Tableau dashboards. Ensure your financial data is clean, accurate, and well-structured. Here are some tips:
- **Consolidate Data Sources:** Gather all relevant financial data from various sources (e.g., ERP systems, spreadsheets, databases).
- **Data Cleaning:** Remove duplicates, correct errors, and handle missing values.
- **Data Structuring:** Organize data into a consistent format. Use meaningful column names and categorize data appropriately.
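To make the data-cleaning step concrete, here is a minimal, hedged sketch (the column layout and values are invented for the example) that uses only the Python standard library to drop exact duplicates and rows with missing values before the data ever reaches Tableau:

```python
import csv
import io

# Example raw export -- in practice this would come from your ERP or spreadsheet.
raw = """date,account,amount
2024-01-31,Revenue,1200
2024-01-31,Revenue,1200
2024-02-29,Expenses,
2024-02-29,Expenses,450
"""

reader = csv.DictReader(io.StringIO(raw))
seen = set()
clean_rows = []
for row in reader:
    key = tuple(row.values())
    if key in seen:  # drop exact duplicate rows
        continue
    seen.add(key)
    if any(v.strip() == "" for v in row.values()):  # drop rows with missing values
        continue
    clean_rows.append(row)

print(clean_rows)
```

The same checks can of course be done in Tableau Prep or in your source system; the point is that deduplication and missing-value handling happen before the dashboard layer.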
## 3. Connect to Data in Tableau
Tableau offers various options to connect to your data:
- **Live Connections:** Connect directly to your data sources for real-time updates.
- **Extracts:** Create a static snapshot of your data for faster performance and offline access.

Choose the connection type that best suits your needs. For financial reporting, live connections are beneficial for real-time data, while extracts can improve performance when handling large datasets.
4. Build Your Dashboard
Now, it's time to start building your dashboard. Follow these steps:
Create Worksheets: Each worksheet in Tableau represents a single visualization. Start by creating individual charts and graphs for each financial metric.
Use Calculated Fields: Leverage Tableau’s calculated fields to perform complex calculations and create new metrics directly within Tableau.
Design the Dashboard: Combine your worksheets into a single dashboard. Drag and drop visualizations, adjust sizes, and arrange them to create a cohesive view.
5. Enhance User Experience
A well-designed dashboard should be intuitive and user-friendly. Consider these best practices:
Interactivity: Add filters, parameters, and actions to allow users to interact with the data. For example, create filters to view data by different time periods or drill down into specific regions.
Visual Appeal: Use consistent colors, fonts, and chart types. Avoid clutter and focus on the most important data points.
Annotations and Tooltips: Provide additional context with annotations and customized tooltips that explain key insights.
6. Test and Validate
Before rolling out your dashboard, conduct thorough testing:
Accuracy Check: Ensure all calculations and data representations are correct.
Performance Test: Test the dashboard’s performance with real data volumes. Optimize if needed by reducing the number of visualizations or using extracts.
User Feedback: Share the dashboard with a small group of users for feedback. Make necessary adjustments based on their input.
7. Deploy and Share
Once validated, deploy your dashboard:
Tableau Server/Tableau Online: Publish your dashboard to Tableau Server or Tableau Online for broader access.
Embedded Dashboards: Embed the dashboard into internal portals or applications for seamless access.
Scheduled Refreshes: Set up scheduled data refreshes to keep the dashboard up-to-date with the latest financial data.
Conclusion
[Implementing Tableau dashboards](https://www.softwebsolutions.com/tableau-consulting-services.html) for financial reporting can significantly enhance your ability to analyze and communicate financial data. By following these steps—defining objectives, preparing data, connecting to Tableau, building and enhancing dashboards, testing, and deploying—you can create powerful, interactive financial dashboards that drive better business decisions. Embrace the flexibility and power of Tableau to transform your financial reporting processes and unlock deeper insights into your financial performance. | shreya123 |
1,868,910 | Top 3 Myths in Software Project Management | Project Management is Only for Large Organizations: If you believe this then you might as well... | 0 | 2024-05-29T11:31:46 | https://dev.to/martinbaun/top-3-myths-in-software-project-management-4ceb | programming, productivity, learning, career | **Project Management is Only for Large Organizations:**
If you believe this, then you might as well believe in mermaids too, because in reality, project management principles can benefit projects of all sizes, from small startups to large enterprises.
**Project Management Guarantees Success:**
Some think that implementing project management practices automatically ensures project success.
Hahahaha. Project management provides structure and organization, but success also depends on many other factors.
If you are just planning everything and not doing anything, hoping it will result in success, then you're the same as the guy who believes unicorns are real.
**Project Management is Strictly Linear:**
Project management doesn’t follow a rigid, step-by-step process from start to finish. In truth, project management often involves iteration, adaptation, and responding to changes throughout the project lifecycle, more like a dragon flying from one place to another in a non-linear way.
-----
_Tired of not getting things done on time?_
I am hosting a webinar busting similar myths that will help you complete your tasks and help you achieve your goals in 2024.
Sign up [HERE](https://martinbaun.com/workshop00/#contact)
| martinbaun |
1,868,909 | Setup Netgear Nighthawk Wi-Fi Extender & Router | A Wi-Fi extender may be your best option if you occasionally encounter signal latency or inactive... | 0 | 2024-05-29T11:31:06 | https://dev.to/mywifiext_net_d616645b17a/setup-netgear-nighthawk-wi-fi-extender-router-j8a | A Wi-Fi extender may be your best option if you occasionally encounter signal latency or inactive spots while using the internet in your house.
The [Netgear Wi-Fi extender](https://www-mywifiext-net.net/blog/netgear-nighthawk-extender-router-setup/) is currently all the rage, given its flawless range and ideal signal power. Additionally, it fixes the problem of sluggish internet speeds and offers continuous internet access throughout every inch of your property.
Consider this gadget if you frequently encounter Wi-Fi network problems as well. However, it would help if you became acquainted with its setup procedure first. If not, your freshly purchased device won't help you the way you hoped.
Benefits of Setting Up the Netgear Nighthawk Wi-Fi Extender & Router
The advantages of the Netgear Nighthawk Wi-Fi Extender & Router include the following:
Enhanced Wi-Fi Performance: The Nighthawk Extender & Router can improve Wi-Fi performance by directing the Wi-Fi signal specifically at your linked devices thanks to cutting-edge technologies like Beamforming+.
Multiple Device Support: Because the Nighthawk Wi-Fi Extender & Router supports various devices, you can link all of your smart devices, gaming systems, computers, and cell phones to the same Wi-Fi network.
Simple Setup: You can swiftly and efficiently set up your network options using the user-friendly interface of the Nighthawk Wi-Fi Extender & Router.
Advanced Security Features: To safeguard your network from unwanted access and secure your confidential information, the Nighthawk Wi-Fi Extender & Router is equipped with advanced security features like WPA2 encryption, visitor network access, and firewalls.
| mywifiext_net_d616645b17a | |
1,868,908 | KMP-102 - XCFramework for KMP Devs | KMP102 - XCFramework for Kotlin Multiplatform Devs Hello! Welcome to the series... | 27,547 | 2024-05-29T11:30:23 | https://dev.to/rsicarelli/kmp-102-xcframework-para-devs-kmp-4a4b | kotlin, kmp, ios, braziliandevs | ## KMP102 - XCFramework for Kotlin Multiplatform Devs
Hello! Welcome to the KMP-102 series. We will dig deeper into Kotlin Multiplatform concepts, learning more about how to integrate our Kotlin code on iOS and other platforms.
To kick off this series, let's learn more about a special file format for sharing code with the Apple family: the `XCFramework`.
### Introduction to Apple's `.framework`
A [framework](https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/BPFrameworks/Concepts/WhatAreFrameworks.html) is a bundle that contains a set of resources and source code meant to be used in projects for the Apple family. In the JVM world, this is equivalent to a `.jar` or, in the Android case, an `.aar`.
It is a precompiled format that can be used freely across Xcode projects. This file format makes it easy to build libraries for Apple devices, allowing them to be distributed and consumed through package managers such as CocoaPods or the Swift Package Manager.
<p align="center">
<img src="https://developer.apple.com/library/archive/documentation/General/Conceptual/DevPedia-CocoaCore/Art/framework_2x.png" alt="AppKit.framework" width="450">
</p>
### Introduction to the XCFramework
The [XCFramework](https://developer.apple.com/documentation/xcode/creating-a-multi-platform-binary-framework-bundle) is a type of bundle, or artifact, that makes it easier to distribute libraries for the Apple family. Basically, instead of distributing several `.framework`s, one per platform, we ship a single `.xcframework` containing multiple `.framework`s, each one representing a specific platform supported by the library.
Kotlin Multiplatform, more specifically Kotlin/Native, uses this artifact to precompile Kotlin code to Objective-C, guaranteeing full interoperability with Swift. With it, our Kotlin code is easily shared across all of the project's supported targets, which greatly simplifies the development process: instead of compiling several `.framework`s for each supported KMP target, we compile a single `.xcframework` covering every target and processor architecture.
### Generating an XCFramework in KMP
Behind the scenes, the KGP (Kotlin Gradle Plugin) uses the Xcode toolchain and gives us an API that lets us create an `XCFramework` from our `build.gradle.kts` files:
```kotlin
kotlin {
val xcFramework = XCFramework(xcFrameworkName = "KotlinShared")
listOf(
iosX64(),
iosArm64(),
iosSimulatorArm64()
).forEach { iosTarget ->
iosTarget.binaries.framework {
baseName = "KotlinShared"
isStatic = true
xcFramework.add(this)
}
}
}
```
After syncing the project, we can see that the `assembleKotlinSharedXCFramework` task has been registered in our project. Note that the task name contains `KotlinShared`, which matches the `xcFrameworkName` parameter of the `XCFramework` class:

### Analyzing the result of the assemble...XCFramework task
When we run the `assembleKotlinSharedXCFramework` task, Kotlin/Native generates the `.xcframework`s for all the targets we defined in `build.gradle.kts`.
This artifact is exactly the file we need to link to the Xcode project in order to consume our KMP code compiled to Objective-C!
> **Note**: Be careful with the project name! Special characters, such as "-", can result in an error, even though the XCFramework is generated.
<p align="center">
<img src="https://github.com/rsicarelli/KMP-101/blob/main/posts/assets/xcframework-task-result.png?raw=true" alt="AppKit.framework" width="450">
</p>
## NativeBuildTypes: debug and release
Notice that two frameworks are generated: the `debug` version and the `release` version. These two types have special characteristics, coming from the [NativeBinaryType](https://github.com/JetBrains/kotlin/blob/master/libraries/tools/kotlin-gradle-plugin-api/src/common/kotlin/org/jetbrains/kotlin/gradle/plugin/mpp/NativeBinaryTypes.kt) class:
Looking at this enum, we can see that the `release` version has `optimized = true` and `debuggable = false`, while the `debug` version has `optimized = false` and `debuggable = true`.
As you might imagine, we must be careful when choosing which `XCFramework` to use in the development flow:
- For the local development environment, the `debug` version is the ideal choice, because it lets us debug our KMP code.
- For the production environment, the `release` version is the correct choice, because the binary is optimized and avoids shipping debug information in the final product.
```kotlin
// kotlin/libraries/tools/kotlin-gradle-plugin-api/src/common/kotlin/org/jetbrains/kotlin/gradle/plugin/mpp/NativeBinaryTypes.kt
enum class NativeBuildType(
val optimized: Boolean,
val debuggable: Boolean
) : Named {
RELEASE(true, false),
DEBUG(false, true);
}
```
## Controlling which build type to generate
The configuration that produces these binary types comes from the `iosTarget.binaries.framework()` function. Looking at the [AbstractKotlinNativeBinaryContainer](https://github.com/JetBrains/kotlin/blob/master/libraries/tools/kotlin-gradle-plugin/src/common/kotlin/org/jetbrains/kotlin/gradle/dsl/AbstractKotlinNativeBinaryContainer.kt) class, we can see that the `framework()` function has a `buildTypes` argument with a default value.
```kotlin
// kotlin/libraries/tools/kotlin-gradle-plugin/src/common/kotlin/org/jetbrains/kotlin/gradle/dsl/AbstractKotlinNativeBinaryContainer.kt
fun framework(
namePrefix: String,
buildTypes: Collection<NativeBuildType> = NativeBuildType.DEFAULT_BUILD_TYPES,
configure: Framework.() -> Unit = {}
) = createBinaries(namePrefix, namePrefix, NativeOutputKind.FRAMEWORK, buildTypes, ::Framework, configure)
// kotlin/libraries/tools/kotlin-gradle-plugin-api/src/common/kotlin/org/jetbrains/kotlin/gradle/plugin/mpp/NativeBinaryTypes.kt
enum class NativeBuildType(...) : Named {
...
companion object {
val DEFAULT_BUILD_TYPES = setOf(DEBUG, RELEASE)
}
}
```
During development, it may be desirable to avoid compiling both versions because of the increased build time. To do that, we just need to adapt our `build.gradle.kts`:
```kotlin
kotlin {
    val xcFramework = XCFramework(xcFrameworkName = "KotlinShared")
    val compileOnlyDebug = true // some gradle.properties flag will help you here!
val buildType = if (compileOnlyDebug)
NativeBuildType.DEBUG
else NativeBuildType.RELEASE
listOf(
iosX64(),
iosArm64(),
iosSimulatorArm64()
).forEach { iosTarget ->
iosTarget.binaries.framework(
buildTypes = listOf(buildType)
) {
baseName = "KotlinShared"
isStatic = true
xcFramework.add(this)
}
}
}
```
## Conclusions
The XCFramework is a central topic in the Kotlin Multiplatform (KMP) universe. Understanding what it is, how it works, and how to generate it gives us greater control and a better grasp of what happens behind the scenes in KMP.
In the next article, we will take a closer look at the `framework()` function!
## Sources
- [KotlinLang | Build final native binaries](https://kotlinlang.org/docs/multiplatform-build-native-binaries.html)
- [Embracing the Power of XCFrameworks: A Comprehensive Guide for iOS Developers](https://medium.com/@mihail_salari/embracing-the-power-of-xcframeworks-a-comprehensive-guide-for-ios-developers-77fe192d47fe)
| rsicarelli |
1,854,504 | How to create a slick CSS animation from Alien | The title sequence for Alien is iconic. It sets the mood of the movie perfectly. Let's see how we can... | 18,255 | 2024-05-29T11:30:00 | https://www.roboleary.net/2024/05/15/alien-title-sequence.html | webdev, css, animation |
The title sequence for Alien is iconic. It sets the mood of the movie perfectly. Let's see how we can recreate it as a web animation!
## TLDR
You can watch [the title sequence on YouTube](https://www.youtube.com/watch?v=7BYzzast0jw).
Here is [the finished animation](https://codepen.io/robjoeol/pen/ZEZXabR).
{% codepen https://codepen.io/robjoeol/pen/ZEZXabR %}
## About the title sequence
The title is unsettling. It opens with a still, far-away shot of a planet, and the camera slowly pans across it. Gradually, some disjointed bits of the title fade into view, turning from a bluish hue to white. There is exaggerated spacing between the letters, so that when the bits of the letters finally resolve into a word, it still feels odd to recognize it as a word. It is backed by a moody instrumental.
The typeface is sans serif. In [an interview with The Art of The Title](https://www.artofthetitle.com/title/alien), the Title Designer Richard Greenberg says the following about the typeface:
> It’s probably a slight variation on Futura, but it wasn’t custom. It was incredibly simple, but it struck a chord. Maybe because it was attached to one of the most frightening movies ever made!
The typeface is probably Helvetica Black. I have used a similar font called [HomepageBaukasten Bold](https://www.fontspace.com/homepagebaukasten-bold-font-f24942) in my implementation.
You can read [the full interview with Richard Greenberg](https://www.artofthetitle.com/title/alien) to learn more about the design of the title sequence.
## The animation
The duration of the title sequence is 2 minutes. The animation has 3 parts:
1. Background pan - The background is panning slowly to the right. When it reaches a point far to the right, it fades out. This runs for the duration of the title sequence.
1. Credit reveal - Credits of the crew are faded in and out. I only included a credit for Ridley Scott to fill the void at the beginning. I wanted to keep it as simple as possible. This occurs 5 seconds into the title sequence, and has a duration of 4 seconds.
1. Title reveal - The title "ALIEN" is revealed segment by segment for each letter. This begins 12.5 seconds into the title sequence. A new segment is revealed approximately every 4 seconds.
A good starting point is to set up some CSS variables to mark out some key values to build a timeline around. It is the third part that has the most going on and has elements relying on each other. This is what I set up:
```css
:root {
/* begining point of title reveal */
--animation-delay-title-reveal: 12.5s;
/* the delay between animation of each segment in title reveal */
--animation-delay-segment: 4s;
/* the duration of animation of each segment in title reveal */
--animation-duration-segment: 3.75s;
}
```
Let's go through each animation part now.
### Part 1 - The background pan animation
The key to getting this background pan animation right is the having an eye-catching image with a decent resolution. We only want to display a cross-section of the image zoomed in. Therefore it needs to look good up close.
I created the background image using a composite of 2 space images I found on Unsplash. The [main image](https://unsplash.com/photos/solar-eclipse-7YiZKj9A3DM) is a solar eclipse. I painted in the distinctive orangish hue and touched it up to get it closer to the original.

Below is the finished background image. The green box in the figure below shows the section we display as the background.
<figure>

<figcaption>The middle of the image is used as the background, designated by the green box</figcaption>
</figure>
We want the background to be responsive but show that same cross section across all viewport sizes. Similar to a movie player, we can create a letterboxed appearance on smaller screen sizes.

To do this we make the `body` a grid and center the "title" `div` that has the background image. We give it a fixed aspect ratio (1920:1080) that matches the background image. We zoom in on the section by doubling the size of the background `background-size: 200%;` and center it vertically using `background-position: 0% 50%;`.
```css
body {
display: grid;
overflow: hidden;
height: 100dvh;
margin: 0;
background-color: black;
place-items: center;
}
.title {
width: 100%;
max-width: 1920px;
aspect-ratio: 1920 / 1080;
background: black;
background-image: url("img/bg.webp");
background-size: 200%;
background-position: 0% 50%;
}
```
The key to the animation is that the image has overflowed the viewport -- half of it is out of view on the right-hand side. To animate it, we want to change the X value of `background-position` to shift the background horizontally.
```css
@keyframes bg-scroll {
to {
background-position: 200% 50%;
}
}
```
Here is a simplified example of this animation part.
{% codepen https://codepen.io/robatronbobby/pen/bGydYEN %}
In the final version, a second animation is added to fade out the title at the end of the sequence.
```css
.title {
animation-name: bg-scroll, fadeout;
animation-duration: 210s, 1s;
animation-timing-function: linear, ease-out;
animation-fill-mode: forwards, forwards;
animation-delay: 0s, 120s;
}
@keyframes fadeout {
to {
opacity: 0;
}
}
```
You may notice that the duration of the `bg-scroll` animation is over 3 and a half minutes (210 seconds). Why is that? The easing is linear, and we want a constant rate for the pan. However, to compensate for not having the perfect image, I increase the `animation-duration` so that it pans close to the edge at a slow, constant pace. The second animation, `fadeout`, effectively hides the entire title at 2 minutes. Probably the better way to get a perfect result would be to tweak the values for `background-size` and `background-position`; I just found this the quickest route to the desired outcome.
### Part 2 - Credit reveal animation
We want to show the credit for "Ridley Scott" for 3+ seconds and then hide it. We are animating the `opacity` of the element. Nothing unexpected here I would say!
```css
p {
animation: show-credit 4s;
animation-timing-function: ease-in-out;
animation-delay: 5s;
}
@keyframes show-credit {
10%,
100% {
opacity: 1;
}
}
```
### Part 3 - The title reveal animation
In order to animate the title, we need to cut up the letters into individual segments. This is the tricky part.
The actual animation is fairly straightforward. We are transitioning the `background-color` from transparent to a blueish hue to white finally.
Initially, I animated `opacity` also, but I found that setting the initial `background-color` as transparent and changing the value to blue and then white matched the original.
```css
@keyframes reveal-bg-color {
0% {
background-color: transparent;
}
33%,
100% {
background-color: blue;
}
100% {
background-color: white;
}
}
```
It was just a case of playing with the keyframes and easing to get the exact feel right.
We can use the aforementioned CSS variables to place each segment animation at the correct point in the timeline using `animation-delay`. We do a calculation using these variables and multiply it by an ordinal number as below:
```css
/* second segment to show - diagonal leg of N */
.letter:nth-of-type(5) i:nth-of-type(3) {
animation-delay: calc(
var(--animation-delay-title-reveal) +
(var(--animation-duration-segment) * 2) +
(var(--animation-delay-segment) * 2)
);
}
/* 3rd segment to show - vertical bar of L */
.letter:nth-of-type(2) i:nth-of-type(1) {
animation-delay: calc(
var(--animation-delay-title-reveal) +
(var(--animation-duration-segment) * 3) +
(var(--animation-delay-segment) * 3)
);
}
```
If you want to learn how to cut up the letter into segments, read on.
#### Cutting up the letters into segments
As a starter, let's get the size and spacing of the title correct. The title has a small margin on the top and sides. The letters are evenly and widely distributed in the available space.

Let's take the basic HTML where we have each letter as its own element.
```html
<h1>
<div>A</div>
<div>L</div>
<div>I</div>
<div>E</div>
<div>N</div>
</h1>
```
We use a grid on the `h1` to evenly divide the space with each grid item (letter) taking up 1 fractional unit (fr).
```css
h1{
display: grid;
grid-template-columns: repeat(5, 1fr);
justify-items: center;
margin: 0 10%;
margin-block-start: 1.25rem;
/* other styles */
}
```
Here is what we got so far...
{% codepen https://codepen.io/robatronbobby/pen/vYwYweV %}
The 4 options for cutting up the letters are:
1. Create 3 versions of each letter stacked up. You can duplicate each letter with the `::before` and `::after` pseudo-elements sourcing the letter from a `data-letter` attribute e.g. `<div data-letter="A">A</div>`. Then, you can cut out a portion of the letter with `clip-path` to have 3 referencable segments of a letter. The limitation is that the letter 'E' requires 4 segments!
1. You can create the segments of the letter using multiple `background-image` instances utilising various gradients to create the shapes. Animating `background-image` is more challenging as you may need to repeat values for keyframes.
1. You could nest multiple elements into each letter `div` to represent a letter part and style each one. It could be any arbitrary element such as `i` to keep it short. It is more straightforward to animate an element rather than a background image. I think this is the easier option.
1. You could create the title as a `text` element in a SVG. Then transform each letter to a `path`. Then, you can divide each `path` into the segments required (more `path` elements). In the [Killing Eve title sequence](https://www.roboleary.net/2020/12/24/title-sequences.html) in this series, I converted a `text` element into individual `path` elements. You can read that if you are curious about that process.
Personally, I find it easiest to edit the title as an SVG in Inkscape (approach 4). However, I will go for approach 3 to demonstrate how we can stick with HTML and do it all in the browser. The markup looks like this:
```html
<h1>
<!--since A has 3 segments, it contains 3 i elements -->
<div>A<i></i><i></i><i></i></div>
<!--since L has 2 segments, it contains 2 i elements -->
  <div>L<i></i><i></i></div>
<!--other letters-->
</h1>
```
We want to position each `i` element absolutely, relative to each letter `div`. We can then style each `i` as an overlay on top of the letter.
```css
h1 div {
position: relative;
}
h1 div i {
position: absolute;
width: 100%;
height: 100%;
left: 0;
top:0;
/* hide the ones we are not interested in */
opacity: 0;
}
h1 div i:nth-child(1) {
/* this one is visible and can see letter underneath */
background: green;
opacity:0.3
}
```
Now we have a green overlay like below.

What I found easiest is to use a polygon `clip-path` and edit it in the devtools (Firefox in my case). For example, I will add a triangle polygon to the element: `clip-path: polygon(50% 0%, 0% 100%, 100% 100%);`. We want the clip path points to be in percentages to ensure that it is responsive.
In the devtools, I can position each of the points to match the outline of the left arm of the letter A. I need to add a fourth point by double-clicking on the line to match the shape. You can see the process in action in the video below:
<video poster="https://www.roboleary.net/assets/img/blog/2024-05-15-alien-title-animation/poster.webp" preload="none" controls>
<source src="https://www.roboleary.net/assets/img/blog/2024-05-15-alien-title-animation/clipping-letter.webm" type="video/webm" />
</video>
This is the codepen with the segment of the letter *A* defined:
{% codepen https://codepen.io/robatronbobby/pen/eYaYwEK %}
This process needs to be repeated for each segment. Approximately a dozen times. Once this is done, you can set the `color` of the `h1` to transparent and then animate each segment in the correct order.
I have left the text content for each letter. I used `clamp()` with the `font-size` to make the title responsive to the viewport.
Alternatively, you could remove the text content. You would need to set the `width` and probably an `aspect-ratio` to emulate the x-height of the text.
## Final thoughts
There was quite a bit of work for this one. Anything that requires preparing images is more time-consuming. While the actual keyframe animations are fairly straightforward, the associated styling required some experimentation to get everything coordinated. The background pan animation is intriguing, I may employ it in a regular webpage somewhere sometime. It is a new trick to add to my collection.
## Source code
The source code is available in [this github repo](https://github.com/robole/title-sequences).
You can check out all of the animations of this series in [this codepen collection](https://codepen.io/collection/nNmwgP).
---
Written by [Rob O'Leary](https://www.roboleary.net)
[Subscribe to web feed](https://www.roboleary.net/feed.xml) for the latest articles.
<small>© Rob OLeary 2024</small>
| robole |
1,868,907 | 5 Mistakes to Avoid While Doing SIT Testing | It is important to ensure the seamless integration and functionality between the different segments... | 0 | 2024-05-29T11:29:14 | https://tfipost.com/2024/04/5-mistakes-to-avoid-while-doing-sit-testing/ | sit, testing | 
It is important to ensure seamless integration and functionality between the different segments of a system. SIT testing is a vital step to detect and repair any problems caused by different subsystems or modules interacting with each other. There are certain aspects that need particular attention, which, if not taken care of, might result in serious errors affecting the overall quality and dependability of a software product. In order to reduce risk, a good start is to recognize common mistakes and proactively take steps to prevent them.
1. **Inadequate Test Planning and Preparation**
Careful test design and preparation are the first steps in an effective SIT process. Not devoting enough time and resources to these duties has a number of hazards. Inadequate testing preparation can lead to incomplete test cases, unseen circumstances, doubtful objectives, and ambiguous exit criteria. It’s critical to ensure that all necessary environments, data, and components are readily available to reduce delays and the possibility of missing dependencies.
2. **Insufficient Test Coverage**
One of the primary objectives of SIT is to confirm the ways in which various system components interact and integrate. Important problems may go undetected if specific conditions or interfaces are ignored. It is essential to identify every potential interconnection point and conduct a comprehensive analysis of the system architecture. Testing that is inadequate due to insufficient test coverage may overlook vulnerabilities and compatibility issues.
3. **Lack of Communication and Collaboration**
In SIT, every team is responsible for a different part or subsystem of the system. Several teams working together are vital for the success of integrated testing. A failure to cooperate or converse adequately can lead to defects, discrepancies, and wasted work. Setting up an atmosphere conducive to mutual efforts and keeping the channels of communication open guarantee that all stakeholders agree on priorities for testing goals.
4. **Inadequate Test Data Management**
Test data is crucial to SIT because it validates how the system behaves in various scenarios and replicates actual occurrences. Inadequate maintenance and management of test data might lead to inconsistent or inaccurate results. Inadequate test data coverage, outdated or missing data sets, and poor data masking or obfuscation could compromise the integrity of the testing procedure and potentially expose private information.
5. **Overlooking Non-Functional Requirements**
Even though functional testing is prioritized during SIT, ignoring non-functional criteria could have disastrous consequences. A thorough evaluation of performance, security, usability, and accessibility is necessary. If these non-functional needs are not taken into consideration during SIT, a system that meets functional standards but fails in other critical areas could have a detrimental effect on the overall user experience and system reliability.
**Conclusion**
A crucial stage of the software development lifecycle, SIT verifies the smooth functioning and integration of diverse system components. Opkey’s robust automation package can help companies streamline SIT. For example, in an e-commerce scenario where inventory data needs to be synchronized between NetSuite and Shopify, SIT guarantees flawless interaction throughout the ecosystem. Opkey’s sophisticated test generation, execution, and analysis tools make this difficult process simpler. Anyone can quickly develop and execute end-to-end tests spanning many systems thanks to its no-code interface. Opkey offers consistent reporting across vendors, automates the creation of data, and populates test scenarios with actual user experiences. To guarantee that the mission-critical integrations run well, companies should base SIT strategy around Opkey. They can reduce time-to-market and create reliable, cohesive software at the same time. | rohitbhandari102 |
1,868,906 | Snowflake Consulting Partner | Teqfocus | Teqfocus, a Premier Snowflake Service Partner, empowers data-driven growth. Get expert guidance on... | 0 | 2024-05-29T11:29:10 | https://dev.to/teqfocus/snowflake-consulting-partner-teqfocus-3428 | Teqfocus, a Premier **[Snowflake Service Partner](https://www.teqfocus.com/snowflake-consulting-partner/)**, empowers data-driven growth. Get expert guidance on Snowflake implementation, migration, and architecture. Partner with us today!
**[Snowflake Consulting Services](https://www.teqfocus.com/snowflake-consulting-partner/)**
| teqfocus | |
1,868,905 | Understanding database indexes - A simple analogy | TL;DR Think of how a DB index works as how a dictionary works: how it allows you to... | 0 | 2024-05-29T11:26:15 | https://dev.to/yannick555/understanding-database-indexes-a-simple-analogy-36be | database, softwareengineering, algorithms, datastructures | ## TL;DR
Think of how a DB index works as how a dictionary works: how it allows you to quickly find any word's definition.
## Introduction
When diving into the world of databases, one of the key concepts you'll encounter is the database index.
To make this idea more relatable, let's compare it to something children learn to use at around ages 7 to 9 and which we all know well—a word definition dictionary.
By the end of this post, you'll see how this familiar tool can help you understand all types of database indexes.
## The Dictionary Analogy
Think about a dictionary for a moment. When you need to find the definition of a word, you don't start reading from the first page, hoping to stumble upon it. Instead, you take advantage of the dictionary's organisation. Here’s how it works:
1. **Alphabetical Order**: Words in a dictionary are sorted alphabetically. This sorting is crucial because it allows you to quickly jump to the section where the word should be.
2. **Index at the Top of the Page**: Each page in the dictionary typically shows the first and last word on that page, giving you a quick reference to know if the word you’re looking for might be on that page.
3. **Quick Lookup**: With this structure, you can find a word and its definition in seconds rather than minutes.
### What If Words Were Not Sorted?
#### Linear search
Imagine if the words in the dictionary were not sorted alphabetically. You’d have to start at the beginning and check each word one by one until you find the one you’re looking for. This method is called a **linear search**.
1. Start at the first word.
2. Check if it’s the word you need.
3. If not, move to the next word.
4. Repeat until you find the word.
In the worst-case scenario, if the word you're searching for is the last word in the dictionary, you would have to check all 5,000 words. Therefore, the maximum number of operations in the case of the linear search is 5,000.
#### Other search strategies
With other search strategies, such as picking a word at random and repeating until you hit the right one, the worst case is unbounded: nothing stops you from re-checking words you have already seen.
If you instead chose a random word only from the words you have not picked yet, the maximum number of operations would again be 5,000, but you would also have to maintain a list of the words you already picked. This would be efficient in neither time nor space (memory).
### Operations in a Sorted Dictionary
Let’s consider a dictionary with 5,000 words sorted alphabetically.
Because the words are sorted alphabetically, you can employ a **binary search** to find a word. Binary search is an efficient algorithm that repeatedly divides the search interval in half:
1. Start by looking at the middle word of the dictionary.
2. If the word you're looking for comes before the middle word, narrow your search to the first half of the dictionary.
3. If the word comes after the middle word, narrow your search to the second half.
4. Repeat this process until you find the word.
In a binary search, each step cuts the number of possible locations in half. Therefore, the maximum number of steps (or operations) needed is the logarithm of the number of words to the base 2, denoted as {% katex inline %}log_{2}(5000){% endkatex %}.
{% katex %}
log_{2}(5000) ≈ 12.29
{% endkatex %}
This means you can find any word in approximately 13 operations at most.
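The binary search described above can be sketched in a few lines of JavaScript (the 5,000-word `words` array below is synthetic, standing in for the sorted dictionary):

```javascript
// Binary search over a sorted array, counting comparisons.
// Returns { index, steps }; index is -1 when the word is absent.
function binarySearch(words, target) {
  let low = 0;
  let high = words.length - 1;
  let steps = 0;
  while (low <= high) {
    steps++;
    const mid = Math.floor((low + high) / 2);
    if (words[mid] === target) return { index: mid, steps };
    if (words[mid] < target) low = mid + 1; // keep the second half
    else high = mid - 1; // keep the first half
  }
  return { index: -1, steps };
}

// A synthetic sorted "dictionary" of 5,000 words: w0000 … w4999
const words = Array.from({ length: 5000 }, (_, i) =>
  "w" + String(i).padStart(4, "0")
);

console.log(binarySearch(words, "w4999").steps); // 13, within the ~13 bound
```

A linear search over the same array would need up to 5,000 comparisons in the worst case.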
**Notes:**
- As we know the first letter of the word we search for, we usually approximate the position of the word as we open the dictionary: obviously, there is no need to start looking in the middle of the dictionary for a word starting with the letter "c". This usually removes a few search steps, further reducing the maximum number of steps needed to find a word to fewer than 12 in the case of a dictionary with 5,000 words.
- As the number of words in the dictionary increases, the gains increase as well because of the logarithmic scale, which increases at a slower rate than the size of the dictionary (compare {% katex inline %}n{% endkatex %} to {% katex inline %}log_{2}(n){% endkatex %}). The gains provided by an index become greater as the number of entries grows. This is a key insight to understanding why queries on large tables (>> 100k rows) are still extremely fast despite the huge amount of data: {% katex inline %}log_{2}(10^5) ≈ 16.61{% endkatex %}, {% katex inline %}log_{2}(10^9) ≈ 29.9{% endkatex %}.
### Comparison
- **Sorted Dictionary (Binary Search)**: Approximately 13 operations.
- **Unsorted Dictionary (Linear Search)**: Up to 5,000 operations.
Now, let’s translate this process to the realm of databases.
## Translating to Databases
### What is a Database Index?
A database index works similarly to the sorted dictionary, making it faster to find specific data in a database. Here’s how the analogy fits:
- **Sorted Order**: Just like the words in a dictionary, data in an index is sorted in a specific order. This might be alphabetical, numerical, or based on another attribute.
- **Index Entries**: In a database, an index contains entries that are analogous to dictionary entries. Each entry in the index contains a key (like the word in a dictionary) and a reference to the actual data (like the definition in a dictionary).
- **Efficient Lookup**: When you search for data, the database uses the index to quickly locate the information, much like you use the dictionary’s alphabetical order to find a word.
Using an index in a database is akin to using a binary search in a sorted dictionary—it's efficient and significantly reduces the number of operations required to find the desired data. Without an index, the database would need to perform a linear search, scanning each row until it finds the matching data, which is far less efficient.
**Note:**
- This shows that a dictionary actually is an index.
### Types of Database Indexes
Let’s generalise this concept to different types of database indexes:
1. **B-Tree Indexes**:
- **How It Works**: Similar to the alphabetical sorting in a dictionary, B-Tree indexes keep data sorted in a balanced tree structure. This ensures that the database can quickly navigate through the data to find what you need.
- **Use Case**: Ideal for range queries and when you need to sort data.
2. **Hash Indexes**:
- **How It Works**: Instead of sorting data, hash indexes use a hash function to distribute data evenly across a set of buckets. Think of it like using a special code to find the word directly rather than flipping through pages.
- **Use Case**: Perfect for exact match queries, where you need to find a specific entry quickly.
3. **Bitmap Indexes**:
- **How It Works**: Bitmap indexes use bitmaps (arrays of bits) to represent the presence of a value or combination of values. It’s like having multiple dictionaries where each dictionary represents a specific attribute of the words.
- **Use Case**: Useful in data warehousing for complex queries involving multiple attributes.
4. **Full-Text Indexes**:
- **How It Works**: These indexes are designed to handle full-text search capabilities. They index words and phrases, much like a detailed glossary or concordance.
- **Use Case**: Great for applications requiring search within large text fields, like articles or books.
### Why Are Indexes Important?
Just as you wouldn’t want to read a dictionary cover to cover to find a word, you don’t want your database to scan through every record to find a piece of data. Indexes improve the efficiency and performance of your database queries, ensuring that information can be retrieved quickly and efficiently.
Without indexes, there is no magic: the database is forced to systematically read each entry one by one until it finds a hit, potentially the last entry. This not only takes more time, it also consumes more CPU time and requires more I/O operations.
### Conclusion
A database index, much like a dictionary, is an essential tool for efficient information retrieval. Whether you're using a B-Tree index, a hash index, a bitmap index, or a full-text index, the underlying principle remains the same: organising data in a way that makes it easy to find. By understanding this concept through the familiar analogy of a dictionary, you can better appreciate how indexes optimise database performance and make data retrieval a breeze. | yannick555 |
1,868,904 | Major Playground Slot Online: A Comprehensive Guide | In the ever-evolving world of online gambling, slot games have emerged as a cornerstone of digital... | 0 | 2024-05-29T11:26:07 | https://dev.to/jacob_hunt_c56ec9fcf53d77/major-playground-slot-online-a-comprehensive-guide-57me | In the ever-evolving world of online gambling, slot games have emerged as a cornerstone of digital entertainment. One of the most prominent platforms offering these games is Major Playground Slot Online. This article delves into the various aspects that make Major Playground a popular choice among slot enthusiasts, providing insights into its features, game variety, security measures, and user experience.
The Allure of Major Playground Slot Online
A Diverse Array of Games
Major Playground Slot Online boasts an impressive catalog of slot games catering to all kinds of players. Whether you're a fan of classic three-reel slots or the more complex video slots with multiple paylines and bonus features, Major Playground has something for everyone. The platform collaborates with top-tier software providers like Microgaming, NetEnt, and Playtech, ensuring high-quality graphics, immersive sound effects, and smooth gameplay.
User-Friendly Interface
One of the standout features of Major Playground is its user-friendly interface. The platform is designed to be intuitive, making it easy for both novice and experienced players to navigate. From the moment you log in, the homepage showcases the latest and most popular slot games, along with categories that help you quickly find your preferred types of slots. Additionally, the site is fully optimized for mobile devices, allowing players to enjoy their favorite games on the go <a href="https://www.bsc.news/post/anjeonhan-bojeungeobce-meijeonoliteo-cuceon-mogrog-anjeonnoliteo-sunwi-seulroscuceon-ggongmeoni">Major Playground</a>.
Security and Fair Play
Rigorous Security Measures
In the online gambling industry, security is paramount. Major Playground Slot Online employs state-of-the-art encryption technology to protect players' personal and financial information. This ensures that all transactions and data transfers are secure from potential cyber threats. Furthermore, the platform is licensed and regulated by reputable gambling authorities, which mandates strict adherence to security protocols and fair gaming practices.
Fair Gaming Practices
Fairness in gaming is another critical aspect that Major Playground emphasizes. The platform uses Random Number Generators (RNGs) to ensure that all game outcomes are completely random and unbiased. This technology is regularly audited by independent third-party organizations to verify its integrity. Players can trust that they are getting a fair chance at winning every time they spin the reels.
Promotions and Bonuses
Attractive Welcome Bonuses
To attract new players, Major Playground offers enticing welcome bonuses. These often include a combination of free spins and deposit match bonuses, giving newcomers a substantial boost to their starting bankroll. These promotions not only enhance the gaming experience but also provide players with more opportunities to win without risking too much of their own money initially.
Ongoing Promotions
In addition to welcome bonuses, Major Playground Slot Online runs regular promotions and loyalty programs for existing players. These can range from weekly cashback offers and free spin bonuses to exclusive VIP rewards. Such promotions are designed to keep the gaming experience fresh and rewarding, encouraging players to stay active on the platform.
Customer Support and Responsible Gaming
Efficient Customer Support
Customer support is a crucial component of any online gaming platform. Major Playground excels in this area by offering 24/7 customer service through multiple channels, including live chat, email, and phone support. The support team is well-trained and responsive, ready to assist with any issues or queries players might have, ensuring a seamless gaming experience.
Commitment to Responsible Gaming
Major Playground is committed to promoting responsible gaming. The platform provides various tools and resources to help players manage their gambling activities. These include setting deposit limits, self-exclusion options, and access to professional help for those who may need it. By fostering a safe and responsible gaming environment, Major Playground demonstrates its dedication to the well-being of its players.
Conclusion
Major Playground Slot Online stands out as a premier destination for slot enthusiasts, offering a comprehensive and secure gaming experience. With its vast selection of games, robust security measures, attractive promotions, and dedicated customer support, it caters to the needs of all types of players. Whether you're a casual gamer or a serious slot aficionado, Major Playground provides a platform where you can enjoy thrilling and fair gameplay while feeling confident about your safety and security. As the online gambling landscape continues to evolve, Major Playground remains a reliable and exciting choice for slot gaming enthusiasts worldwide.
| jacob_hunt_c56ec9fcf53d77 | |
1,868,903 | Future my game development | Current I'm post graduate student in the UK and study game development. I'm going to... | 0 | 2024-05-29T11:23:30 | https://dev.to/takeda1411123/future-my-game-development-1md3 | # Current
I'm a postgraduate student in the UK studying game development.
I'm going to create the games below in the future.
1. Kick The Can
2. The Sunlight
3. AR Game
## Kick The Can
This game is inspired by the traditional game of kick-the-can that everyone played during their childhood. I will start by developing a simple 2D version of the game and work towards its release.
## The Sunlight
This game is inspired by the idea of everyone in the UK seeking sunlight. I will also start by developing a simple 2D version of the game and work towards its release.
## AR Game
This game is planned to be developed using AR. It will be designed for use in the tourism industry. Details will be updated regularly in this blog.
To be continued...
| takeda1411123 | |
1,868,902 | Need software development | We are a large interpreting company needing a scheduling software build for over 800 interpreters... | 0 | 2024-05-29T11:23:26 | https://dev.to/lindad/need-software-development-408o | We are a large interpreting company needing a scheduling software build for over 800 interpreters with 50-80 appts per day. We currently use Aqua Schedules but need to customize our own and have a mobile app. Must integrate with quickbooks online. Can you help us? | lindad | |
1,868,901 | How To Display JavaScript Objects | While primitives in JavaScript are values themselves, everything else, including arrays and... | 0 | 2024-05-29T11:22:05 | https://dev.to/thejoernal/how-to-display-javascript-objects-1jbi | javascript, webdev, dom, tutorial | While primitives in JavaScript are values themselves, everything else, including arrays and functions, **are objects**. Understanding how to leverage objects in the Document Object Model *(DOM)* is vital for effective web development.
This is a guide on different methods of displaying Objects when working with the DOM.
____
Displaying a JavaScript Object will output `[object Object]` by default:
```js
const mySelf = {
name: "Joe",
age: 21,
city: "Bangalore"
};
document.getElementById("demo").innerHTML = mySelf; //[object Object]
```
This can however be **maneuvered** and solved with a couple of workarounds:
1. Displaying Object properties by name
2. Displaying Object Properties in a Loop
3. Displaying Object using Object.values()
4. Displaying Object using Object.entries()
5. Displaying Object using JSON.stringify()
## Display the Object Properties
Object properties can be displayed as a string:
```js
// create object
const mySelf = {
name: "Joe",
age: 21,
city: "Bangalore"
};
// display properties
document.getElementById("demo").innerHTML = mySelf.name + ", " + mySelf.age + ", " + mySelf.city;
```
## Display Object Properties in a Loop
Object properties can also be collected in a loop.
Here, we use a loop variable (`x`):
```js
// Create an Object
const mySelf = {
name: "Joe",
age: 21,
city: "Bangalore"
};
// Build a Text
let text = "";
for (let x in mySelf) {
text += mySelf[x] + " ";
};
// Display the Text
document.getElementById("demo").innerHTML = text;
```
**NOTE:** You must use `mySelf[x]` in the loop.
`mySelf.x` will not work since `x` is the loop variable.
## Display using Object.values()
`Object.values()` creates an array from the properties' values.
```js
const mySelf = {
name: "Joe",
age: 21,
city: "Bangalore"
};
// create array
const myArray = Object.values(mySelf);
// display the array using DOM
document.getElementById("demo").innerHTML = myArray;
```
## Display using Object.entries()
Using `Object.entries()` simplifies using objects in loops:
```js
const pens = {Blue:20, Black:22, Red:10};
let text = "";
for (let [pen, value] of Object.entries(pens)) {
text += pen + ": " + value + "<br>";
}
//display the count of each pen color
document.getElementById("demo").innerHTML = text;
```
## Display using JSON.stringify()
You can convert JavaScript objects into a string with the JSON method `JSON.stringify()`
This method is included in JavaScript and is also supported in most browsers, at least all the major ones.
```js
const mySelf = {
name: "Joe",
age: 21,
city: "Bengaluru"
};
// use JSON.stringify() method
let myString = JSON.stringify(mySelf);
// display the output
document.getElementById("demo").innerHTML = myString;
```
This method will output a string written in JSON notation:
```js
{"name":"Joe","age":21,"city":"Bengaluru"}
```
___
In summary, effectively displaying JavaScript objects in the DOM is crucial for dynamic web development.
By utilizing techniques such as accessing properties by name, iterating through properties, or using `Object.values()` and `Object.entries()`, you can present object data meaningfully.
Understanding these methods enhances user experience and showcases JavaScript's versatility in web applications. Mastering these techniques empowers you as a developer to create engaging and dynamic web content.
**Have fun learning 😉**
This is a cross-post from my original [blog site](https://thejoernal.netlify.app/posts/display-js-objects/). | thejoernal |
1,853,894 | Domain-Driven Design - Part 2 - Tactical Design | Introduction In the first part of the series we talked about the Strategic Design. We saw... | 27,480 | 2024-05-29T11:21:17 | https://dev.to/axeldlv/domain-driven-design-part-2-tactical-design-243n | ddd, architecture, productivity, learning | ## Introduction
In the first part of the series we talked about the Strategic Design. We saw the subdomain types, ubiquitous language, bounded context and integration patterns.
Now, we need to deep dive into the _Tactical Design_ which is the "technical" part of the DDD concept.
## Building blocks
1. **Value objects** - *Structural equality and Immutable*
The **value object** is a small object that represents a simple entity whose equality is not based on identity but on value. Value objects are **immutable**, meaning their state cannot be changed after they are created.
2. **Entities** - *Identifier equality and mutable*
Entities are objects that have a distinct identity that runs through time and different states.
Unlike value objects, entities are **mutable** and their **identity** (e.g., a UUID) is more important than their attributes.
3. **Aggregates Root** - *Type of entity and mutable*
An **aggregate** is a _type of entity_ (acting as a transactional boundary) that has an explicit identifier (e.g., a UUID) and whose state is expected to change over its lifecycle. The purpose of the aggregate pattern is to ensure the consistency of the data within the aggregate as one atomic transaction.
4. **Domain services** - *Stateless service*
A domain service is an object devoid of state that encapsulates the business logic. Typically, it orchestrates interactions between different components within the system to execute calculations or analyses.
5. **Domain events** - *Event occurs in the business domain*
A domain event is an event that describes a change in a business domain.
```json
{
"eventType": "FlightScheduled",
"eventPayload": {
"FlightID": "AA1234",
"Origin": "JFK",
"Destination": "LAX",
"DepartureTime": "2024-06-01T08:00:00Z",
"ArrivalTime": "2024-06-01T11:00:00Z",
"Status": "Scheduled"
},
"eventTimestamp": "2024-05-21T10:00:00Z"
}
```
## Architectures
There are various ways to implement application architecture.
Below, you will find two types of application architecture that you can use.
First one is the `Layered Architecture` and the second one the `Hexagonal Architecture`.
### Layered Architecture

**User Interface / Presentation Layer**
It is the `interface for interactions` with customers (e.g : Web UI, CLI, REST API).
**Application / Service Layer**
The service layer functions as `a bridge between the presentation layer and the business logic` layer of the application.
**Model / Domain / Business Logic Layer**
This layer is responsible for `implementing and encapsulating the application's business logic`. It is where business decisions are executed.
**Infrastructure / Data Access Layer**
This layer integrates the `Persistence Systems`. In modern systems, it can be used for databases (DynamoDB, RDS, etc.), message buses (SQS, SNS, etc.), or object storage (S3, Azure Blob Storage, etc.).
### Hexagonal Architecture

**Business Model / Domain**
This is the heart of the application, encompassing all the business rules and logic. It operates independently and does not depend on any external systems.
**Application**
`Ports` act as gateways for communication, functioning as either inbound or outbound ports.
- `Inbound ports` : Like a service interface, exposes the core logic to the outside world.
- `Outbound ports` : Like a repository interface, enables communication from the application to a persistence system.
**Infrastructure**
`Adapters` function as implementations of ports, handling user input and converting it into language-specific commands. They contain the logic for interacting with external systems like message queues and databases, facilitating communication between external entities and the core logic.
Adapters come in two types:
- `Primary Adapters` or `driving adapters`: These drive the application using its `inbound port` (e.g: a user interface adapter or REST controller adapter).
- `Secondary Adapters` or `driven adapters`: These implement `outbound ports` and are driven by the application (e.g: message queues (SQS adapter), databases (DynamoDB adapter), and external API calls).
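The port-and-adapter split can be sketched in JavaScript (plain JS has no interfaces, so the outbound port is simply the shape the core expects; `FlightService` and the in-memory repository are hypothetical names, reusing the flight data from the domain-event example in the previous article of this series):

```javascript
// Inbound port: the application service the outside world calls.
// It depends only on the *shape* of an outbound port (a repository),
// never on a concrete technology.
class FlightService {
  constructor(flightRepository) {
    this.flightRepository = flightRepository; // injected secondary adapter
  }

  scheduleFlight(flightId, origin, destination) {
    // Core logic knows nothing about DynamoDB, SQL, SQS, HTTP, ...
    const flight = { flightId, origin, destination, status: "Scheduled" };
    this.flightRepository.save(flight);
    return flight;
  }
}

// Secondary (driven) adapter: one implementation of the outbound port.
// A DynamoDB or SQL adapter would expose the same save/findById shape.
class InMemoryFlightRepository {
  constructor() {
    this.store = new Map();
  }
  save(flight) {
    this.store.set(flight.flightId, flight);
  }
  findById(flightId) {
    return this.store.get(flightId);
  }
}

// A primary (driving) adapter, e.g. a REST controller, would call:
const service = new FlightService(new InMemoryFlightRepository());
console.log(service.scheduleFlight("AA1234", "JFK", "LAX").status); // Scheduled
```

Swapping the in-memory adapter for a real persistence adapter requires no change to the core `FlightService`.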
## Conclusion
Through the tactical lens, we delved into the building blocks of DDD: value objects, entities, aggregates, domain services, and domain events. Each component plays a specific role, from representing immutable values and maintaining entity identity to encapsulating business logic in services and signaling domain changes through events.
Moreover, we explored two principal architectural styles that facilitate the implementation of DDD: the Layered Architecture and Hexagonal Architecture. The Layered Architecture organizes code into distinct layers that separate responsibilities from the user interface down to infrastructure. In contrast, Hexagonal Architecture focuses on decoupling the core logic from external influences, emphasizing portability and adaptability through ports and adapters.
## Thank for reading
If you have any questions, feedback, or suggestions, please feel free to leave them in the comments below. I'm eager to hear from you and respond to your thoughts!
| axeldlv |
1,846,046 | Domain-Driven Design - Part 1 - Strategic Design | Introduction Domain-Driven Design (DDD) is a well known concept of software design... | 27,480 | 2024-05-29T11:20:47 | https://dev.to/axeldlv/domain-driven-design-part-1-strategic-design-30b2 | ddd, architecture, productivity, learning | ## Introduction
`Domain-Driven Design` (DDD) is a well known concept of software design approach based on domain models explaining in the book *Domain-Driven Design: Tackling Complexity in the Heart of Software* in 2003 by Eric Evans and *Implementing Domain-Driven Design* in 2013 by Vaughn Vernon.
We are starting with the "theorical part" of the DDD concept, the _Strategic Design_.
## Type of Subdomains
In DDD, a Business domain (the main area of focus for a company) typically comprises three types of subdomains:
1. **Core subdomains** - *Specific/unique from other company*
These are specific and unique to a company, often involving new technologies or services that differentiate it from competitors. For example, AWS's core subdomains include e-commerce, cloud computing, and artificial intelligence.
2. **Generic subdomains** - *Performing in the same way as other company*
These are areas where using existing solutions (off-the-shelf software) saves time compared to developing custom solutions. Examples include AWS IAM (Identity and Access Management) and online retail platforms.
3. **Supporting subdomains** - *It does not provide any competitive impact*
These subdomains do not directly impact competitiveness but support the overall business domain. Examples include CRUD operations (Create, Read, Update, Delete) and ETL (Extract, Transform, Load) processes.
## Ubiquitous Language
Communication between business stakeholders and developers can be challenging due to differing perspectives.
DDD introduces an Ubiquitous Language to facilitate efficient communication and active participation of domain experts.
This language can be documented using various methods such as wikis or word documents, ensuring clarity and consistency in communication.

## Bounded Context
A Bounded Context divides the ubiquitous language into smaller, explicit contexts. For example, in a business model, contexts could include "shopping" and "accounting."
While these contexts can evolve independently, they must integrate with each other, often through contracts.
## Patterns of Integration
### Cooperation
* **Partnership** :
The integration between bounded contexts is coordinated in an ad hoc manner. The communication is two-way, teams cooperate during the work.
Often, this process involves a collaboration between two software companies, or a software company and a hardware company, to integrate their respective products or services, thereby improving the experience for the end user.

* **Shared Kernel** :
This integration is used when two or more bounded contexts have to communicate with each other via a shared model (e.g: Shared JARs or Database schema).
Each context can modify the share model and take effect on other bounded contexts.

### Customer-Supplier
It is a upstream-downstream relationship where the *supplier* is the *upstream* and the *customer* the *downstream*.
* **Conformist** :
The client (downstream) has to be conform from what the supplier (upstream) sent to it. There is not translation of models.

* **Anticorruption layer**
It is the other side of conformist, the client (downstream) translates the data (bounded context's model) into a model tailored to it is own context/domain.
The ACL acts as a translation layer, ensuring that the two systems can communicate without corrupting each other's design and functionality.
The legacy system uses an old relational database model and communicates via SOAP web services, while your system uses a modern RESTful API architecture.

* **Open Host Service**
The supplier (upstream) is intended to expose a protocol convenient (translate the data before send the message) using a public language for the consumers (downstream).
We can use web services or micro-services defined services using a API for your service to expose an interface openly accessible.

### Separate Ways
Integration between contexts does not involve collaboration between teams.

## Go further
The connections between the bounded contexts can be visualized on a context map. This tool provides understanding of the system's overarching structure, communication flow, and potential organizational challenges.
Now that you've gained insight into techniques of domain-driven design for examining and modeling business domains, we'll pivot our focus from strategic considerations to tactical approaches that you can find here.
## Thank for reading
If you have any questions, feedback, or suggestions, please feel free to leave them in the comments below. I'm eager to hear from you and respond to your thoughts! | axeldlv |
1,868,900 | Innovative and Functional: The Importance of Counter Design in 2024 | In an era where user experience takes center stage, the design of the reception counter is... | 0 | 2024-05-29T11:19:34 | https://dev.to/nicolasberry/innovatief-en-functioneel-het-belang-van-balie-ontwerp-in-2024-416d | balie | In an era where user experience takes center stage, the design of the reception counter is crucial for companies and organizations. An innovative and functional **[counter](https://peackinterior.com/pages/design-je-eigen-balie)** design can not only improve visitors' first impression but also increase staff efficiency. In this article, we will examine the value of a well-thought-out counter design and explore some innovative approaches that have become the norm in 2024.
**The Importance of an Innovative Counter Design**
A counter is often the first physical point of contact between a company and its visitors. It is therefore essential that this point of contact is not only aesthetically appealing but also works functionally and efficiently. A well-designed counter can help create a positive first impression, streamline processes, and improve the overall user experience.
**Efficiency and Streamlining**
In 2024, efficiency is not just about speed, but also about the ability to integrate and streamline processes seamlessly. Modern counter designs often include advanced technologies such as self-service kiosks, digital queues, and automated check-ins. These technologies reduce waiting times, minimize human error, and improve the overall efficiency of front-office operations.
**Personalization and Experience**
Another important aspect of an innovative counter design is the ability to personalize and enhance the experience. By using data-driven insights and customer-centric approaches, companies can offer personalized services and increase visitor satisfaction. This can range from personalized greetings to offering tailor-made product recommendations based on previous purchases.
**Conclusion**
In 2024, reception counter design has evolved from a simple administrative space into a strategic element for companies and organizations. An innovative and functional counter design is not only aesthetically appealing but also essential for creating positive user experiences and optimizing operational efficiency. By investing in advanced technologies and customer-centric approaches, companies can gain a competitive advantage and stand out in an ever-changing business landscape.
| nicolasberry |
1,868,899 | Free Invoice Generator | Some Information About Our Free Invoice Generator An invoice is a written statement that lists the... | 0 | 2024-05-29T11:18:29 | https://dev.to/invoice_generator_b818104/free-invoice-generator-1kn1 | invoice, generator | Some Information About Our Free [Invoice Generator](https://invoicegeneratoronline.com/invoice-generate.php)
An invoice is a written statement that lists the goods or services a company has rendered to a client or customer, together with the total amount that is owed. Businesses frequently send invoices to clients in order to obtain payment for the goods or services rendered.
An invoice often contains information like the names and addresses of the seller and the customer, a description of the goods or services provided, the cost of each good or service, any taxes that may be involved, shipping and handling costs, the conditions of payment, and the total sum that is owed.
For businesses to keep correct financial records and guarantee they get paid for the goods or services given, invoices are crucial. Additionally, they assist clients in keeping tabs on their purchases and payments and can serve as evidence of payment if necessary.
Invoice Legal Requirements
Due to their role as a record of business transactions between a seller and a buyer, invoices are a crucial component of the documentation process for organisations. Depending on the nation, state, or area where the transaction takes place, there may be different legal requirements for invoices, but generally speaking, the following information should be contained in an invoice:
Date of issuance: The date on which the invoice was issued.
Unique invoice number: A sequential number used to identify an invoice specifically.
Business information: The seller's (or the company sending the invoice) name, address, and phone number.
Customer information: The buyer's name, address, and contact details.
Product or service description: An explanation of the offered products or services.
Amount of products or services: The quantity of goods or services that are offered.
Price per unit of goods or services: The cost of the items or services being offered.
Total amount owed: The full sum owed for the products or services rendered.
Terms of payment: The conditions of payment, such as the deadline and any relevant discounts or penalties.
Tax details: Any relevant taxes or charges, such as value-added tax or sales tax.
Payment instructions: Details about how to pay, like bank account details or available payment methods.
It is important to note that legal requirements for invoices may differ by jurisdiction, so firms are advised to consult a lawyer or tax specialist to ensure compliance with relevant laws and regulations.
1,868,898 | Exploring Essential Tooling Types for Web Development.🚀 | 1. Linters and Formatters Purpose and Usage: Detecting Potential Bugs: Linters, such as... | 0 | 2024-05-29T11:16:59 | https://dev.to/dharamgfx/exploring-essential-tooling-types-for-web-development-4ddd | tooling, webdev, javascript, beginners | ##
### 1. Linters and Formatters
**Purpose and Usage:**
- **Detecting Potential Bugs:** Linters, such as ESLint, analyze your code to find potential errors and bugs. They highlight issues like undeclared variables or syntax errors, making debugging easier.
- *Example:* Running ESLint on a JavaScript file can detect missing semicolons or incorrect variable names.
- **Coding Standards Automation:** Formatters like Prettier enforce consistent coding styles across your project. They automatically adjust the formatting of your code, ensuring it adheres to predefined rules.
- *Example:* Prettier can reformat your entire JavaScript file to ensure consistent indentation and line spacing.
**Workflow Integration:**
- **Code Editor Extension:** Install ESLint or Prettier extensions in your code editor (e.g., VSCode) to receive real-time feedback and automatic formatting.
- **Test Runner Integration:** Incorporate linting and formatting checks into your test runner (e.g., Jest) to catch issues before they reach production.
- **CI/CD Pipelines:** Add linters and formatters to your Continuous Integration (CI) pipeline to enforce coding standards automatically.
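As an illustration, a minimal ESLint flat configuration might look like the following. This is a sketch, not a complete setup; it assumes ESLint v9+ is installed (`npm install --save-dev eslint`) and uses the flat-config file name `eslint.config.js`:

```javascript
// eslint.config.js — a minimal flat-config sketch.
export default [
  {
    files: ['**/*.js'],
    rules: {
      'no-undef': 'error', // flag undeclared variables
      'no-unused-vars': 'warn', // warn about variables that are never used
      semi: ['error', 'always'], // require semicolons
    },
  },
];
```

Running `npx eslint .` would then report these issues across the project.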
### 2. Package Managers
**Purpose and Usage:**
- **Managing Dependencies:** Tools like npm and yarn help manage project dependencies, ensuring all required libraries are installed and up to date.
- *Example:* Use `npm install react` to add React to your project.
- **Finding and Installing Packages:** Access registries like npm’s repository to find packages that solve specific problems and easily install them.
- *Example:* Search for `npm install lodash` to add utility functions to your project.
**Automation with Scripts:**
- **Deployment and Testing:** Use npm scripts to automate repetitive tasks, such as building your project or running tests.
- *Example:* Add a script `"test": "jest"` in `package.json` to run Jest tests with `npm run test`.
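For example, the `scripts` section of a `package.json` can wire these tasks together. This sketch assumes `eslint`, `prettier`, and `jest` are installed as dev dependencies:

```json
{
  "scripts": {
    "lint": "eslint .",
    "format": "prettier --write .",
    "test": "jest",
    "check": "npm run lint && npm run test"
  }
}
```

With this in place, `npm run check` lints and tests in one step, which is handy both locally and in a CI pipeline.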
### 3. Build Tools/Bundlers
**Purpose and Usage:**
- **Managing Dependencies:** Build tools like Rollup.js, Parcel, and Vite.js manage and bundle your project’s dependencies into a single or few optimized files.
- *Example:* Use Vite.js for a fast development server and optimized build for production.
- **Optimization and Minification:** These tools optimize and minify your code, reducing file size and improving load times.
- *Example:* Rollup.js can tree-shake and bundle multiple modules into a single minified file.
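As a sketch of what such a setup can look like, here is a minimal Rollup configuration that bundles and minifies a single entry point. It assumes `rollup` and `@rollup/plugin-terser` are installed as dev dependencies, and the file paths are hypothetical:

```javascript
// rollup.config.js — minimal bundling-and-minification sketch.
import terser from '@rollup/plugin-terser';

export default {
  input: 'src/index.js', // hypothetical entry point
  output: {
    file: 'dist/bundle.min.js', // single optimized output file
    format: 'esm',
  },
  plugins: [terser()], // minify the bundle
};
```

Running `npx rollup -c` would then produce the optimized bundle in `dist/`.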
### 4. Deployment Tools
**Purpose and Usage:**
- **Deploying Applications:** Tools like Simple FTP clients, Netlify, and Vercel simplify the deployment of your web applications.
- *Example:* Deploy a static site to Netlify with drag-and-drop simplicity or through Git integration.
- **Automated Deployments:** Integrate with GitHub to trigger deployments automatically on code push, ensuring your application is always up to date.
- *Example:* Configure Vercel to deploy your Next.js application automatically whenever you push changes to GitHub.
**Advanced Features:**
- **Custom Domains and Serverless Functions:** Handle custom domains, integrate serverless functions, and manage form submissions effortlessly.
- *Example:* Use Netlify Functions to create serverless API endpoints.
- **Access Control and Identity Integration:** Integrate with identity providers like Okta and Auth0 for secure access control.
- *Example:* Use Auth0 to manage user authentication and authorization in your application.
### Conclusion
Understanding and utilizing these common tooling types is crucial for modern web development. Linters and formatters ensure code quality, package managers simplify dependency management, build tools optimize your code, and deployment tools streamline the launch process. By mastering these tools, you can enhance your development workflow and deliver high-quality web applications efficiently. | dharamgfx |
1,868,791 | How Spread Operator is Scamming you | Hi guys, all of us are familiar with the spread operator that is denoted by '…', and it is most... | 0 | 2024-05-29T11:16:08 | https://dev.to/masteing_the_code/how-spread-operator-is-scamming-you-4ko1 | typescript, learning, beginners, javascript | Hi guys, all of us are familiar with the spread operator, denoted by `...`, and it is most commonly used for copying arrays or objects. But did you know that it is scamming us in the name of copying?
Let me make things clear...
Try to guess the output of this code :
```tsx
const obj = {
name: 'John',
};
const copyObj = { ...obj };
copyObj.name = 'Doe';
console.log(obj);
console.log(copyObj);
```
Output :
```
{ name: 'John' }
{ name: 'Doe' }
```
Great⭐! Now how about this one :
```tsx
const obj = {
name: 'John',
dob: {
year: 2000
}
};
const copyObj = { ...obj };
copyObj.name = 'Doe';
copyObj.dob.year = 2024
console.log(obj)
console.log(copyObj)
```
Output :
```
{ name: 'John', dob: { year: 2024 } }
{ name: 'Doe', dob: { year: 2024 } }
```

Wait…. now what happened here? Why did the first object also have its value changed?
## Explaining the scam
The reason for this is the `shallow copy` that the spread operator performs when copying an object.
A `shallow copy` means that top-level primitive values are copied properly: modifying them does not affect the original object. Nested objects, however, are not duplicated; the spread operator copies only a reference to them, so modifying a nested property changes the original object as well.
## Over to you
This is just one of the property of spread operator, there is more to learn about it and ways to avoid shallow copy when not required. So go out there and find your answers.
So, that's all, guys!😉
Just keep learning!💪 | masteing_the_code |
1,868,957 | A new adventure awaits | I want to start with a heartfelt thank you to all of the community members both inside and outside... | 0 | 2024-06-04T13:59:35 | https://jasonstcyr.com/2024/05/29/a-new-adventure-awaits/ | devrel, sitecore, career, community | ---
title: A new adventure awaits
published: true
date: 2024-05-29 11:15:00 UTC
tags: DevRel,Sitecore,career,community
canonical_url: https://jasonstcyr.com/2024/05/29/a-new-adventure-awaits/
cover_image: https://jasonstcyr.com/wp-content/uploads/2024/01/sugconeu-2018.jpg?w=1024
---
I want to start with a **heartfelt thank you** to all of the community members both inside and outside Sitecore who have been on this journey with me. It was just over 7 years ago that I left the Sitecore partner agency life and joined Sitecore with the goal of driving forward Sitecore's engagement with technical audiences around the world. At the time, I was already a huge fan of the community and over my tenure at Sitecore this wonderful community has become an extension of my chosen family. Every day I was given the chance to see these amazing people connect with each other and help each other out. I have so many memories that I will get to cherish for the years to come, whether it was on #SitecoreLunch getting the latest news on Eurovision and Windows95man, or meeting up at events for the hallway chatter and taking 45 minutes to make it down the hall, or those wonderful user groups in Montreal for Sitecore trivia night, or the virtual developer days we got to join together for, or the discussions on forums and Slack about pretty much everything, or trying to beat Nick Allen at a dunking contest in Orlando (**SPOILER:** I didn't! 🤣). I put a lot of myself into this role... it was much more than just a job that paid the bills.
It's also not often that you find yourself as part of an amazing team of people with an aligned vision to do what's right and help as many people as possible. I am so very proud of the great work that our team accomplished during my years at Sitecore. We all had different ideas and passions, but we had each other's backs and we drove each other to be better. It was with this group that I learned how to build and connect a remote team that was distributed across the globe. I learned about listening more, and sometimes, I actually talked less. Those folks who had the pleasure of regular calls with me know that was a tremendous challenge! 😄
Some of the work I was most proud of was the long-term initiative with [our YouTube channel](https://www.youtube.com/c/DiscoverSitecore) that grew to almost **9,000 subscribers**! We released new content weekly for over 6 years, and really used that as a way to reach our community and help folks get excited about what was happening. The massive [Developer Portal](https://developers.sitecore.com/) project that we undertook was probably one of the most visible impacts we made, eventually creating a great landing place for our audience that improved the developer experience and became a central place for developers to land and find what they needed, including the new SaaS Changelog that we delivered via that portal. There were so many other accomplishments over the years, too many to name, but I'm very proud of how my team took charge and jumped into whatever was most important.
Last week, I emailed my team with the news that I had been impacted as part of a larger restructuring. I was filled with a lot of raw emotions, as I wasn't just losing team members, these were my friends. My team will forever have a special place in my heart. I want to give a special shout out to Pieter Brinkman who had a vision and hired the people who could make that vision a reality. I hope we did you proud!
> **Side note:** Two amazing people from my team, [Dylan Young](https://www.linkedin.com/posts/dylanyoung_opentowork-activity-7199034609001463808-JGQU/) and [Thomas Desmond](https://www.linkedin.com/posts/thomas-desmond_opentowork-opentowork-developerrelations-activity-7199096543923089408-8pYQ/), are great communicators and engineers who are also now looking for their next role. They would be great additions to your team!
Last week I also shared the news with a few folks in the community and I was blown away by the reaction! So many people reached out to me privately with messages of support and thanks. It has been a privilege and an honour to have been able to be a part of this community and connect with so many of you. No matter what happens as I look for my next opportunity, you are a part of who I am now and I will always be cheering on this community. Thank you for all these years together!
One thing I know is that my time spent with the community was never spent unwisely. I have always believed that for us all to succeed we need to connect at a very human level by being **helpful**, **kind**, and **honest**. Working to build those connections with you all, at scale, around the world, has been one of the most rewarding accomplishments of my life. Was there some self-serving nature to it? Absolutely! I believe that Sitecore's success is strongly tied to the experience with not just the product, but also the experience around the company itself. Whether that meant tutorial videos, written articles, webinars, events, newsletters, real-world examples running in production... we found many ways to try to find where there was a friction point and make it smoother. I know there is always more that could have been done, but we fought every day to work on what was the most important way to help at that moment.
I'm not sure what lies ahead on the road, whether it will be in the Sitecore space or something altogether new. So rather than say goodbye, I'll quote the famed children's entertainer, Mister Rogers, who once sang:
> And when you wake up, ready to say
> I think I'll make a snappy new day.
>
> It's such a good feeling, a very good feeling
> The feeling you know that I'll be back
> When the day is new.
> And I'll have more ideas for you.
>
> And you'll have things you'll want to talk about
> I. Will. Too.
**Until next time, neighbours!** | jasonstcyr |