MOTOR INSURANCE

7.1 MOTOR OWN DAMAGE INSURANCE

With new liberalization policies encouraging FII (Foreign Institutional Investment), automobile giants from all over the world started establishing their base in the Indian market, with companies like Hyundai, Ford, etc. flooding the market with technologically advanced new models of vehicles. This boom in the automobile industry and the growing consumerism saw a fourfold increase in premium income from motor insurance for all insurers in India. With the flourishing of the automobile industry, motor insurance has become a lucrative business, but it requires careful underwriting, as the number of accidents has increased due to the explosion of the vehicle population, bad roads, rash and negligent driving, and poor maintenance of vehicles. On the other hand, theft of vehicles has also increased disproportionately.

Did you know…. NANO is the first indigenous private car with a rear engine.

7.1.1 BASIC PRINCIPLES
The following basic principles are applicable to Motor Insurance contracts (refer to section 1.0.1):
o Insurable Interest
o Indemnity
o Utmost Good Faith
o Subrogation
o Proximate Cause
o Contribution

Did you know…. Motor Insurance in India cannot be transacted outside the purview of the Erstwhile IMT unless specifically authorized.

7.1.2 SUBJECT MATTER
Any motor vehicle, construction vehicle, plant and machinery on wheels, or special purpose vehicle, self-powered, driven or being pulled, for private or public use, irrespective of the number of wheels fitted or the type of fuel used (petrol, diesel, CNG, LPG, even electric or battery fed).

7.1.3 CLASSIFICATION OF VEHICLES
There are different categories of vehicles plying on the road in accordance with the provisions of the Motor Vehicles Act.

Motor vehicles: Any mechanically propelled vehicle used upon roads; this includes a chassis to which a body is not attached and a trailer, but does not include a vehicle running on fixed rails or one specially adapted for use within factory premises.
• Private car: A private car is a type of vehicle used for social, domestic, pleasure and professional purposes and not for carriage of goods (other than samples), excluding use of the vehicle for hire or reward, organized racing, pace making, reliability trials, speed testing, or any purpose in connection with the motor trade.
• Two-wheeler: A motorcycle is a mechanically self-propelled two-wheeler, with or without gears; a kick-starter vehicle is treated as a geared vehicle for insurance rating.
• Scooter: A mechanically propelled two-wheeler with variable gears.
• Auto cycles: A pedal cycle mechanically assisted by a motor engine of up to 75 cc capacity.
• Commercial vehicles:
i. Goods carrying vehicle (private carrier): The owner of the transport vehicle uses the vehicle only for carriage of goods which are his property, or carriage of goods which are necessary for the purpose of his business.
ii. Goods carrying vehicle (public carrier): The owner of the transport vehicle uses the vehicle for carriage of goods which are not his property.
iii. Public service vehicle: A motor vehicle used for carrying passengers, including maxi cabs, motor cabs, contract carriages and stage carriages.
iv. Maxi cab: Any motor vehicle constructed or adapted to carry more than six but not more than twelve passengers, excluding the driver, for hire or reward.
v. Motor cab: A motor vehicle used to carry not more than six passengers, excluding the driver, for hire or reward.
vi. Contract carriage: A motor vehicle which carries passengers for hire or reward under a contract, the vehicle being engaged as a whole for an agreed sum, either on a time basis or on a point-to-point basis.
vii. Stage carriage: A motor vehicle which can carry more than six passengers, excluding the driver, for hire or reward, with separate fares paid by individual passengers for the whole journey or for stages of the journey.
viii. Miscellaneous type of vehicle: All other vehicles which do not fall under any of the categories listed above are classified under this category. Examples: ambulance, agricultural tractor and trailer, road rollers, excavators, etc.

Did you know…. A Semi-Articulated Vehicle is a combination of a tractor and trailer connected through a coupling for universal maneuvering. This combined arrangement can be passenger carrying or goods carrying depending on body type. The trailer cannot be separately taken under Class B while underwriting. Both GVWs together have to be taken for the goods carrying type, and in no situation can this be taken under the miscellaneous class.
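The seating-capacity rules quoted above (motor cab: not more than six passengers excluding the driver; maxi cab: more than six but not more than twelve) lend themselves to a simple rule-based check. The sketch below is illustrative only: the function name and interface are invented for this example, and the fallback label for larger passenger vehicles is an assumption, since buses may be stage or contract carriages depending on how fares are charged.

```python
# Illustrative classifier for the MV Act passenger-vehicle categories
# described above. Thresholds follow the text; names are hypothetical.

def classify_passenger_vehicle(passengers_excluding_driver: int,
                               for_hire_or_reward: bool) -> str:
    """Suggest a category from seating capacity and use."""
    if not for_hire_or_reward:
        return "Private car"              # social/domestic/pleasure use
    if passengers_excluding_driver <= 6:
        return "Motor cab"                # not more than 6, excluding driver
    if passengers_excluding_driver <= 12:
        return "Maxi cab"                 # more than 6 but not more than 12
    return "Stage or contract carriage"   # larger buses: depends on fare basis

print(classify_passenger_vehicle(4, True))   # Motor cab
print(classify_passenger_vehicle(10, True))  # Maxi cab
```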
Dumpers and Tippers were transferred from the category of Miscellaneous Types of Vehicles to the category of Goods Carrying Vehicles by IRDA vide order dt. 29.03.2012. Tractors other than agricultural tractors, such as electric trolleys or tractors, traction engine tractors, trolleys and goods carrying tractors, were also reclassified as Goods Carrying Vehicles by the above cited order. Agricultural tractors continue to remain in the category of Miscellaneous and Special Types of Vehicles.

7.1.4 TYPES OF INSURANCE POLICIES
• Liability Only Policy: The minimum cover required under the Motor Vehicles Act; it provides compensation for death and/or property damage caused to third parties out of the use of a motor vehicle in a public place, for which the insured is liable to pay. The extent of liability is as per the Motor Vehicles Act. This also includes compulsory PA cover for the owner-driver.
• Package Policy: An insurance policy which covers accidental damage to the vehicle involved in an accident along with, or in addition to, the third party liability.
• Liability only with Fire and/or Theft: Covers fire and/or theft of the vehicle to be insured, in addition to third party liability. This decision is taken by the underwriter after considering various factors such as make and model of the vehicle, declinature of insurance by previous insurers, past claims experience, etc. Note: This cover is prohibited for vehicles covered under Class D of the IMT.
• Fire and/or Theft only: This cover is given if the vehicle to be insured is laid up(1) in the garage or remains unused. Note: This cover is prohibited for vehicles covered under Classes D, E, F and G of the IMT.
• Motor Trade Policies: Motor trade policies are designed for motor vehicle manufacturers, dealers and repairers who deal with motor vehicles that remain in their custody as part of their trade. Proposers must have their own trade plates issued by the Registered Transport Authority. This policy takes care of damage to the vehicle and death or bodily injury to third parties. This insurance is not like other motor insurance policies given to the registered owner of the vehicle.
• Internal Road Risk Policy: This policy is issued to manufacturers or dealers. It takes care of the transport risk during the period of transit from one place to another. Usually the vehicles involved are unregistered and uninsured under a normal motor policy.

7.1.5 SCOPE OF MOTOR INSURANCE
Underwriters and the insured mutually agree to the scope of the contract and other terms and conditions, such as:
• Insured perils
• Conditions of the contract to be observed by the insured and the insurer during the currency of the policy
• The value for which insurance is done
• Period of the contract of insurance
• Procedure to be followed in case of material alterations
• Rate of premium compatible with the risk covered
• Rights of the insurers
• Duties of the insured
• General exclusions (these exclusions cannot be deleted; their breach will render the contract void ab initio)
• Specific exceptions, which are outside the scope of the contract

(1) A laid-up vehicle is one which is laid up in a garage and not in use for a period of 2 consecutive months or more, and not left for repairs due to an accident. A concession is provided for such vehicles, provided the period of suspension does not extend beyond 12 months from the original expiry date of the policy. The laid-up period is counted from the date of surrender of the Certificate of Insurance.

7.1.6 INSURED PERILS
• Fire, self-ignition or lightning
• Burglary, housebreaking or theft
• Riot and strike
• Earthquake (fire and shock damage)
• Flood, typhoon, hurricane, storm, tempest, inundation, cyclone, hailstorm, frost
• Accidental external means
• Malicious act
• Terrorist activity
• Whilst in transit by rail or inland waterway
• Rock slide

Accidental external means the happening of something unexpected or unforeseen, and it excludes loss arising from natural causes within. The word external refers to what is outwardly visible, i.e. what is not internal. Example: Loss or damage to the car due to overheating is not covered.

Self-ignition: This appears to include damage or loss caused by an internal defect of the car which is the direct cause of fire.

The term malicious damage is intended to include loss arising from the malicious act of a third party, and not the act of the insured. If it results from the insured, the act becomes willful.

Accessories: Accessories are those items which are not necessary for running the vehicle, but which the vehicle is required to carry with it under the Motor Vehicles Act. However, there can be accessories fitted that are not a mandatory requirement under the MV Act. Example: Rear view mirror, crash guard.

Electrical/Electronic items, for insurance purposes, refers to items that are fitted to the vehicle in addition to those provided by the manufacturer of the vehicle, including accessories.

With regard to the details of perils for different types of vehicles, the student may refer to the annexure and comparative charts.

7.1.7 CANCELLATION & TRANSFER
Cancellation of policy:
• At the option of the insurer: With 7 days' notice by registered letter to the insured at his last mentioned address. The insured is entitled to a refund of premium for the unexpired period, and the insurer retains the premium for the expired period proportionately.
• At the option of the insured: With 7 days' notice; the insured is entitled to a refund of premium for the number of days unexpired, but the insurer retains the premium for the period in which the risk was in force more than proportionately, on a short period basis, provided no claim has been preferred by the insured.
Refund of premium is subject to retention of a minimum premium as per norms. No cancellation is allowed if the ownership of the vehicle is transferred to the new owner, unless evidence of a policy for the vehicle is produced.

Transfer of policy in the event of the death of the insured: The policy will lapse 3 months from the date of death of the insured or on the expiry of the policy, whichever is earlier.
a. During the said period the legal heirs can get the policy transferred, subject to their application with:
i. Death certificate of the insured and legal heirship certificate
ii. Proof of title to the motor vehicle
iii. Copy of the policy
b. The insurance company reserves its right to abide by any order of the court with regard to the declaration of the legal heirs and ownership of the vehicle, and the nominee will not have any right against the order of the court.

Transfer of policy in case of change of ownership: The policy benefits accrue to the buyer of the vehicle once the sale consideration is paid and suitable endorsements are made in the certificate of registration, provided the transfer of insurance from the original owner to the new owner is done within 14 days of sale, as per the Motor Vehicles Act. If this is not done, the benefit for accidental damage to or loss of the vehicle is forfeited from the 15th day itself; the Act is, however, generous towards third party liability, which is considered to be deemed transferred.

7.1.8 PREMIUM AND RATING
Various factors determine the quantum of premium:
a. Value of the vehicle
b. Additional accessories
c. Extra fittings like electronic and non-electronic items
d. Type of vehicle
e. Age of vehicle
f. Seating capacity / gross vehicle weight
g. Perils covered
h. Combination of risks, like comprehensive cover, third party, and fire or theft or fire and theft
i. Past claims experience

Annual Premium: As motor policies are annual policies, the premium is collected for 12 months. It is not permissible to insure for more than one year under motor insurance.

Pro-rata Premium: Under some circumstances, depending on provisions made available in the tariff, premium is charged in proportion to the number of days for which the risk has been in force. Such premium is known as pro-rata premium. Situations where pro-rata premium is charged:
i. Due to change of ownership of the vehicle, the insurance gets transferred to the new owner. This may happen during the currency of the policy period, and the new owner may like to have an extension of the policy period so that he gets an insurance policy for not more than a complete 12 months. The insured can get such an extension with a suitable premium for the additional period of insurance, without the insured being given a revised policy for a period of more than 12 months.
ii.
Some insured desire to revise their policy period to coincide with the financial year or assessment year.
iii. The insured desires to enhance the value of the vehicle during the currency of the policy in order to keep pace with the market value.
iv. Any additional extra items, like electronic or non-electronic items subsequently fitted in the vehicle, can be added to the value of the vehicle insured during the currency of the policy with a suitable additional premium.
v. Sometimes the insured may desire to add back extraneous perils like earthquake, flood, riot & strike during the currency of the policy, which he had originally opted out of in exchange for a reduced premium.

Short Period Premium: There are occasions where the insured needs insurance for a period of less than 12 months. Such a facility is allowed, but the insured has to pay the premium on a short period basis. The premium for a short period is slightly higher than the regular premium rating factor; that is, a policy for a short period is more expensive than a normal annual policy. Situations under which short period premium is collected:
i. When the policy is issued for a period of less than 12 months
ii. When the policy is cancelled at the request of the insured

Premium Rebates: The insurer recognizes the merit of claim-free clients, and the premium for the renewal period is reduced by way of a bonus. The bonus is rewarded on the own damage premium for the value of the vehicle only, and not on the premium for third party liability. Tables of no claim bonus are provided in the tariff for different categories of vehicles. This discount goes with the insured and not with the vehicle, i.e., if the vehicle is sold, the new owner is not eligible for the no-claim bonus. However, the previous owner can apply the discount to any new vehicle which he may purchase within three years from the date of transfer. If the vehicle is sold to a spouse, children or parents, the discount passes on to such persons. Similarly, if a vehicle is used or operated by an employee of an institution and the same is transferred to him at a later date, he can avail of the no claim discount. For persons coming from abroad, the discount can be allowed provided the insured produces a letter to the effect that he is eligible for the discount, within three years from the expiry of the overseas policy. In case of renewals, the no-claim discount can be granted to the insured only if he renews his policy within 90 days.

Vehicles used in Own Premises and Confined Sites: A reduction in premium is allowed if the vehicle is not licensed for road use and is used in the insured's own premises to which the public have no access. A similar discount is allowed for goods carrying vehicles which need not be registered and which are used in confined sites to which the public has no access.

Did you know….
The minimum premium applicable for vehicles specially designed or modified for use by the blind, handicapped and mentally challenged persons is Rs. 25/-.

Vehicles Specially Designed for Handicapped Persons: A discount in premium is allowed for vehicles which are specially designed for and used by handicapped persons, and by institutions engaged exclusively in services for the handicapped and mentally challenged, as per the provisions of the MV Act.

Automobile Association Membership: If the insured is a member of a recognized automobile association, a discount of 5% shall be granted, subject to a maximum of Rs. 50/- for private cars.

Voluntary Excess Discount: Some insured desire to avoid preferring insurance claims to the extent that losses can be borne within their own financial limits. The premium is reduced based on the quantum of excess chosen by the insured, as per the tariff.

Concession for Laid-Up Vehicles: If a vehicle is laid up in a garage and is not put to use for a continuous period of more than 2 months, the liability of the insurers under the liability risk section of the policy is suspended for that period, and a concession is given to the insured. The concession is given in two forms, and the insured can choose whichever he wants:
a. Pro-rata refund of premium for such period. This refund is granted in the form of credit and not as cash, i.e., such refund can be adjusted against the premium for a subsequent renewal.
b. The policy period can be extended after the expiry of the policy for a period equal to the period of such lay-up.

Under the Accidental Damage section, the cover is suspended for the period during which the vehicle is laid up in the garage and not in use, and:
a. Restricted cover for fire and/or theft is granted for the period of lay-up, and a refund of premium on a pro-rata basis is made after charging a premium for the restricted cover. Again, the refund is on a credit basis and not cash.
b. As an alternative, the insured can extend the policy period after the expiry of the policy for a period equal to the period of lay-up.

A notice in writing must be given to the insurers regarding the lay-up, and the certificate of insurance must be surrendered. Such lay-up of the vehicle must not be for the purpose of repairing the vehicle. The period of suspension of cover shall not extend beyond 12 months from the expiry date of the policy.
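The two laid-up concession options above (pro-rata premium credit, or extension of the policy period by the lay-up period) can be sketched as follows. This is an illustrative sketch only: the function names are invented, the 2-month qualifying period and 12-month cap follow the text, and the credit would in practice be adjusted at renewal rather than paid out.

```python
# Sketch of the two laid-up concession options described above.
# Names are hypothetical; figures follow the text's rules.

from datetime import date, timedelta

def layup_prorata_credit(annual_premium: float, layup_days: int) -> float:
    """Option (a): pro-rata credit for the lay-up period (adjusted at
    renewal, not paid in cash). No concession under ~2 months."""
    if layup_days < 60:
        return 0.0
    return round(annual_premium * layup_days / 365, 2)

def layup_extension(expiry: date, layup_days: int) -> date:
    """Option (b): extend the policy by the lay-up period, capped so the
    suspension does not run beyond 12 months from the original expiry."""
    return expiry + timedelta(days=min(layup_days, 365))

print(layup_prorata_credit(7300.0, 73))        # 1460.0 credit
print(layup_extension(date(2024, 3, 31), 73))  # 2024-06-12
```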
Public place, according to Section 2(24) of the MV Act, means "a road, street, way or other place, whether a thoroughfare or not, to which the public have a right of access, and includes any place or stand at which passengers are picked up or set down by a stage carriage".

7.1.9 EXCLUSIONS
• Geographical area: Damage to or loss of the vehicle, and any liability incurred, are covered only if the accident takes place within India and, in the case of commercial vehicles, within the area of jurisdiction of the permit.
• Contractual liability is excluded.
• No insurance claim is payable if:
i. The insured violates the conditions of limitations as to use
ii. The vehicle is driven by any person other than the driver whose name, if any, is specified in the policy
• The insurers will pay only for the resultant damage or loss consequent to the accident, and not for consequential loss that may arise due to non-usage of the vehicle, such as:
i. Rent for an alternate car
ii. Loss of earnings whilst the vehicle is in the garage for accidental repairs
• No liability arising directly or indirectly from, or contributed to by, ionizing radiations, or contamination by radioactivity from any nuclear fuel or from nuclear waste from the combustion of nuclear fuel. Damage caused by nuclear weapons material is not admissible.
• No claim due to war or warlike operations.

The premium must be calculated in accordance with the premium computation tables appearing in the tariff, separately for different types of vehicles. The rate of premium is different for accidental damage to the insured's own vehicle and for the liability risk to third parties. The insured cannot choose to pay premium only for accidental damage; he must necessarily take third party liability along with accidental damage to the vehicle, whereas the risk of third party liability can be taken separately and the premium paid for it alone. The premium payable on a policy is based on the value for which insurance is sought and must be calculated in accordance with the premium computation tables appearing in the tariff.

Termination of contract: A contract of insurance can be terminated in the following circumstances:
a. At the option of the insurer
b. At the option of the insured
c. Double insurance: If it comes to the knowledge of the insurer, or the insured finds, that there are two co-existing policies for the same vehicle for the same period, the one which was taken first remains and the later policy gets cancelled, the premium being refunded after retaining a nominal amount towards administrative and documentation expenses. Retention of a minimum premium is necessary in the event of cancellation, to take care of administrative expenses.

7.1.10 CLAIM SETTLEMENT METHODS
There are two types of losses:
a. Partial loss: When the vehicle sustains damage in an accident and the insured incurs expenditure in order to repair the damaged parts of the vehicle, in addition to towing charges to the repairer's shop, and this expenditure is less than the insured value of the vehicle under the policy, the loss or damage falls under partial loss. This covers:
i. Accidental damage to the vehicle
ii. Theft: repairing the damaged parts, cost of removal from the accident spot to the repairer's workshop, etc.
b.
Total loss: There is a total loss when the insured vehicle is stolen, or the vehicle is so damaged that it cannot be repaired without incurring expenditure greater than the sum insured, or the vehicle is so damaged that its damaged value is as good as nil. Such losses fall under total loss.

The insurance company practices different modes of claim settlement depending upon the nature of the claim, the extent of repairs and the market value of the vehicle on the date of the accident. The different modes of settlement are detailed below:

Repair basis: The surveyor ascertains the total internal and external physical damage to the vehicle, identifies the nature of the damage and the cause of the accident, and then determines the extent of the damage. Once the surveyor is satisfied with the genuineness of the claim, taking into account the cause of the accident and the perils insured, he arrives at the cost of repairs, the cost of replacement of parts and the salvage value. He then discusses and negotiates with the repairer to arrive at a consensus, and authorizes the repairer to carry out the repair work relevant to the accident. Under the repair basis, the insured bears a portion of the repair cost as depreciation, based on the age of the vehicle as recorded in the policy. The surveyor suggests settlement of the claim on repair basis only when he is satisfied that the quantum involved is economical in comparison with the market value or the sum insured, whichever is less. The insured is required to submit the relevant bills for the cost of labour, the cost of parts and the cost of removal from the spot of the accident to the repairer's workshop. On submission of the bills and surrender of salvage to the insurer, the claim will be processed and settled. Settlement of a claim on repair basis falls under partial loss, as the repair liability of the insurer is less than the value insured.

Total Loss Basis or Total Loss Net of Salvage Basis: Under many circumstances, the insurance company may opt to take over the damaged vehicle if the repair liability is found to be on the higher side and uneconomical as compared to the market value. In fact, only if the gross repair cost exceeds 75% of the IDV can a CTL (Constructive Total Loss) be considered. The insurer may have to incur additional expenditure such as garage charges and cost of disposal in the form of advertisement and auction charges.

Documents required for claims:
Private cars:
• Registration certificate
• Driving license (except for parked vehicles and theft or burglary of the vehicle in parked condition)
• Taxation book
Commercial vehicles:
• Registration certificate
• Driving license
• Taxation book
• Fitness certificate
• Permit
• Authorisation for National Permit
• Trip sheet
• Weigh slip
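The 75%-of-IDV threshold for a Constructive Total Loss, stated above, reduces to a one-line test. This is a minimal sketch of that test only; actual settlement also weighs salvage value, garage and disposal charges, etc., and the function name is invented for illustration.

```python
# Sketch of the CTL test described above: CTL may be considered only
# when the gross repair cost exceeds 75% of the IDV.

def is_constructive_total_loss(gross_repair_cost: float, idv: float) -> bool:
    """Return True when the 75%-of-IDV threshold is crossed."""
    return gross_repair_cost > 0.75 * idv

print(is_constructive_total_loss(80_000, 100_000))  # True  (80% of IDV)
print(is_constructive_total_loss(60_000, 100_000))  # False (repair basis)
```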
• No discount or special rating is provided in the IMT for classic cars. Classic cars are those manufactured after 31.12.1940 but before 31.12.1970 and certified by the Vintage and Classic Car Club.
• Depreciation is not applicable for arriving at the IDV of brand new vehicles covered under a Motor Trade policy.
• Compulsory PA cover for the owner-driver is to be taken for all vehicles under ownership.
• For policies issued to cover imported vehicles belonging to Embassies, High Commissions or Consulates, and such other Diplomatic Missions, where the import duty element is not included in the IDV, the premium under Section I should be loaded by 30%.
• Where the vehicle is fitted with a CNG/LPG kit whose value is not separately available, 5% extra is to be charged on the OD premium.
• A Certificate of Insurance for a motor vehicle is to be issued only in FORM 51, in terms of Rule 141 of the Central Motor Vehicles Rules, 1989.
• Nil Depreciation cover: Different insurers have different add-on covers as per the material filed by them under File and Use guidelines, but with the Nil Depreciation add-on cover, no depreciation is applicable to the replacement of parts in case of a partial loss. However, the rate of depreciation applicable for arriving at the IDV will continue to apply.
• Return to Invoice/CTL: Actual terms and conditions vary from insurer to insurer. Additional expenses like registration costs, tax paid, etc. may also be reimbursed under this add-on cover.
• Long Term Motor Two Wheeler Insurance Policy: NIC has launched, after IRDA approval, a standalone motor third party insurance policy for two-wheelers for a period of 2 or 3 years, subject to various conditions, such as: insurers will not be able to cancel the standalone TP cover except in case of total loss; premium shall not be revised upwards or downwards during the period of the policy; etc. Since the approval and the basic terms and conditions are as per the IRDA directive, these are common for all insurers in India.

7.2 MOTOR THIRD PARTY INSURANCE
Motor Third Party Insurance (TP Policy / Act Policy) is issued under the provisions of the Motor Vehicles Act, 1988. It is compulsory under law and is designed to protect the interests of third parties. When a motor vehicle is in use in a public place, whether running or stationary, it can accidentally cause harm to others. Members of the public, i.e. pedestrians, passengers in a bus, people travelling in an oncoming vehicle, cyclists, employees engaged in a commercial vehicle, etc., may be injured or killed in an accident. Property belonging to a third party may be damaged. The object of motor third party insurance is to cover the risk of the vehicle owner, who is likely to incur liability for payment of compensation to the third party. Motor TP insurance is different from other branches of insurance.
It covers statutory liability, which is unlimited, whereas other branches of insurance cover contractual liability limited to the sum insured. A financier has an interest in the other branches, whereas no such term exists in a third party policy. Motor vehicles belonging to the Central and State Governments, any Local Authority, or any State Transport Undertaking are exempted from the compulsory insurance provision under Section 146(3) of the Motor Vehicles Act, 1988, provided such authority establishes and maintains a fund to meet the liability arising out of the use of any vehicle belonging to it. Motor TP policies are governed by the Motor Vehicles Act, the WC Act, the Legal Services Authority Act, the Courts, Lok Adalats, etc. The terms 'tort', 'negligence', 'in course of employment' and 'vicarious liability' are relevant for the purpose of dealing with third party claims. Death of, or property damage to, a third party is caused by the fault of the driver; the vehicle owner, being the master, becomes vicariously liable for the fault committed by the servant (the driver) under the law of tort. Similarly, the employer is liable for harm caused to employees connected with the vehicle in the course of employment.

7.2.1 RELEVANT SECTIONS OF MV ACT, 1988

Section 133 – Duty of the owner to give all information relating to an accident to a police officer on demand.
Section 134 – Duty of the driver or other person in charge of the vehicle to take all reasonable steps to secure medical attention for the injured person.
Section 146 – Compulsory insurance against third party risks.
Heliyon 10 (2024) e27252. Available online 29 February 2024. 2405-8440/© 2024 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC license ( ).

Research article

A new improved randomized response model with application to compulsory motor insurance

Ahmad M. Aboalkhair (a,b,**), A.M. Elshehawey (c), Mohammad A. Zayed (a,b,*)
(a) Department of Quantitative Methods, College of Business Administration, King Faisal University, Al-Ahsa, 31982, Saudi Arabia
(b) Department of Applied Statistics and Insurance, Faculty of Commerce, Mansoura University, Mansoura, 35516, Egypt
(c) Department of Applied, Mathematical & Actuarial Statistics, Faculty of Commerce, Damietta University, New Damietta, 34519, Egypt

ARTICLE INFO. Keywords: Sensitive attributes; Non-sampling error; Randomized response technique; Privacy measure; Third-party liability insurance; Insurance inclusion

ABSTRACT. One of the challenges when investigating sensitive attributes, or information that people tend not to disclose, through surveys is the ethical obligation to preserve the privacy of respondents. Although the randomized response method, originally suggested by Warner, allows estimating the proportion of such attributes within the population while maintaining confidentiality, the variance of the estimate consistently increases if the likelihood of selecting the question about the sensitive attribute increases. The purpose of this research is to introduce a new three-stage RR model which provides an efficient alternative to Warner's model, allowing more credibility from a practical perspective, and to apply the model to estimate the noncompliance ratio in compulsory motor insurance. For the two models, a measure of privacy protection was calculated, and a relation between this measure and the efficiency of both models was introduced. Efficiency comparisons indicate that the proposed model can always be made more efficient than both Warner's and Mangat & Singh's RR models.
The proposed model, with specific parameter selection, was applied to a selected population and proved practically reliable. The noncompliance ratio for obtaining compulsory motor insurance was estimated by both a point estimate and a confidence interval. This estimate provides a basis to predict third party motor insurance inclusion.

*Corresponding author: Department of Quantitative Methods, College of Business, King Faisal University, Al-Ahsa, 31982, Saudi Arabia. **Corresponding author: Department of Quantitative Methods, College of Business, King Faisal University, Al-Ahsa, 31982, Saudi Arabia. E-mail addresses: aaboalkhair@kfu.edu.sa (A.M. Aboalkhair), a-elshehawey@du.edu.eg (A.M. Elshehawey), mzayed@kfu.edu.sa (M.A. Zayed). Received 12 April 2023; Received in revised form 22 February 2024; Accepted 27 February 2024.

1. Introduction

Surveys are widely used by researchers as a principal technique to assess attitudes and behaviors in various situations and in many areas. When survey questions deal with sensitive or embarrassing issues (such as the use of drugs, psychiatric conditions, cheating behavior, fraud in insurance, etc.), research and practical difficulties arise, mainly due to the ethical obligation of researchers to preserve the privacy of their respondents. Other issues of concern may also appear, among which is the non-sampling error bias resulting from "response effects" such as refusing to respond or untruthful reporting. Before the randomized response technique (RRT) was introduced, little progress had been made towards the solution of these issues.

Randomized response (RR) is a method used in surveys whose goal is to reduce or eliminate response errors when respondents are queried about illegal behaviors, sensitive or highly personal matters. A randomized response design indirectly acquires information from respondents by using a probabilistic random device through which the respondent selects a question from two or more questions, at least one of which is sensitive. This happens without revealing to the interviewer which question has been chosen. Since the types of responses are the same for each question, no respondent (or answer) can be classified with certainty a posteriori with respect to the sensitive characteristic. Consequently, when using RR, it is assumed that the information received from respondents is truthful and sufficient for estimation purposes. In addition, for any given probability distribution used in the design, it is possible to compute unbiased estimates of the parameters, such as the proportion, associated with the sensitive attribute.
Therefore, RRT enables the researcher to assess attitudes and behaviors of populations which direct questions often fail to assess. Randomized response (RR) was first proposed by Warner in 1965. He built his method on the premise that respondents' cooperation should be better if the questions allow answers that reveal less, even to the interviewer. The method is essentially based on a random device that makes the interviewee respond with answers that furnish information only on a probability basis. Although Warner's method allows collecting responses about sensitive issues while maintaining confidentiality, the estimate of the proportion of the population with a sensitive attribute has additional variance due to the random device. Since Warner's proposal, the randomized response technique has been extended in a number of directions by several authors who focused on reducing the variance of the estimate and improving the efficiency of the model, whether by suggesting parameter selection according to specific criteria that ensure minimizing the variance, or through using different estimation methods, or, mostly, via suggesting a design modification to the original model of Warner.

In general, the efficiency of the randomized response estimate depends on all parameters involved in the estimate. However, while choosing certain values for these parameters could result in a minimum variance for the estimator, those same values could make respondents more suspicious, so that the bias arising from incomplete or untruthful answers dominates the mean square error, which in turn decreases the efficiency of the estimate. Several authors have suggested that the efficiency of the randomized response technique can be improved by methods based on alternative estimation procedures. A few researchers suggested using information about auxiliary characteristics (covariates) to improve RRT estimate precision. Other researchers have dealt with the idea of adopting a Bayesian approach for the RRT. Some developments suggested stratified sampling to improve the estimate of the sensitive attribute. The main approach, adopted by most research, for increasing the efficiency of RRT is based on design modification. Different modifications of Warner's RR were developed by various authors, to name a few: Greenberg et al.; Moors; Raghavarao; Mangat and Singh; Kuk; Mangat; Singh et al.; Bhargava and Singh; Singh et al.; Haung; Chang et al.; Gupta et al.; Gjestvang and Singh; Perri; Abdelfatah and Mazloum; Batool et al.; Singh and Gorey; Tarray and Singh; Narjis and Shabbir; Singh and Suman. Singh et al.
proposed a model in which three randomizing devices are given to the respondents, the first two devices carrying the same two statements: (i) I belong to sensitive group A1, and (ii) Go to the next randomization device; the third device is the same as in the model suggested by Singh et al. The latter was a modified version of Greenberg's unrelated question model, using a randomizing device that carries three statements: (i) I belong to sensitive group A1; (ii) I belong to non-sensitive group A2; and (iii) Report "No". In this paper, a new model is considered that differs from the Singh et al. procedure in the sense that the third randomization device used in the proposed procedure is the same as in Warner's model.

As a suggested application of the proposed RR model, the case of obligatory motor liability insurance is examined. This type of insurance is an ongoing concern, especially in regions where insurance culture and awareness are still developing. In 2009, a report by The World Bank shed some light on this issue in developing countries and emphasized the importance of this type of insurance for road safety, personal responsibility, and safe transport systems in these countries. The report raises an issue of awareness, pointing out that car owners tend to think of motor insurance as a type of tax they can freely avoid rather than as a safeguard against personal liability, a concept that is unfamiliar to the public. The lack of awareness and negative perceptions of motor liability insurance among drivers has an impact not only on seeking insurance coverage, but also on insurance claims. The sufficiency of obligatory motor liability insurance premiums (contributions), which is a direct reflection of car owners' compliance with the law, is a significant factor that affects the functionality of the insurance system as a whole. In general, insurance rates and loss ratios are widely affected by the sufficiency or insufficiency of premiums, and rates can periodically increase due to this factor. Examples of such scenarios have been discussed by some authors, considering rate changes due to this along with other factors.

2. Materials and methods

2.1. The original Warner's model

Warner developed a method for estimating the proportion π of persons with a sensitive attribute A, without requiring the individual respondent to report his actual classification, whether A or not-A, to the interviewer. The respondent is provided with a randomizing device (such as a spinner) in order to choose one of two statements of the form:

(a) I belong to sensitive group A (selected with probability p1);
(b) I do not belong to sensitive group A (selected with probability q1).

Without revealing to the interviewer which statement has been chosen, the respondent answers "yes" or "no" according to the statement selected and to his actual status with respect to the attribute A.
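To make the device mechanics concrete, the following Monte Carlo sketch simulates Warner's procedure and applies his estimator; this is illustrative code, not the authors', and the population proportion π = 0.20 and design probability p1 = 0.7 are arbitrary example values.

```python
# Illustrative Monte Carlo sketch of Warner's randomized response design.
# Each respondent's spinner selects statement (a) "I belong to A" with
# probability p1, otherwise statement (b) "I do not belong to A"; the
# respondent then answers "yes"/"no" truthfully for the selected statement.
import random

def warner_survey(pi: float, p1: float, n: int, seed: int = 7) -> float:
    """Simulate n randomized responses and return Warner's estimate of pi."""
    rng = random.Random(seed)
    q1 = 1.0 - p1
    yes = 0
    for _ in range(n):
        in_group = rng.random() < pi         # respondent's true status
        statement_a = rng.random() < p1      # which statement the spinner chose
        says_yes = in_group if statement_a else not in_group
        yes += says_yes
    alpha_hat = yes / n                      # observed proportion of "yes"
    return (alpha_hat - q1) / (1.0 - 2.0 * q1)   # requires q1 != 0.5

est = warner_survey(pi=0.20, p1=0.7, n=200_000)
print(round(est, 3))   # recovers a value close to the true pi = 0.20
```

With a large sample the estimate lands close to the true proportion even though no individual answer reveals the respondent's status.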
With a random sample drawn with replacement of n respondents, the maximum likelihood estimate of π as given in Warner's design, with an appropriate change of notation, is:

π̂ = (α̂ − q1) / (1 − 2q1),  q1 ≠ 0.5   (1)

where α̂ is the observed proportion of "yes" answers, α̂ = n′/n, with n′ the number of "yes" answers in the sample. If all respondents answer the selected statement truthfully, the resulting estimate is unbiased, with variance:

V(π̂) = π(1 − π)/n + q1(1 − q1) / [n(1 − 2q1)²]   (2)

It is clear from Eq. (2) that the variance of the π estimator under Warner's model is greater than the typical variance of a proportion estimate, and that the extra term (the second fraction in Eq. (2)) depends on p1 = 1 − q1. Warner established that the variance of π̂ decreases as |p1 − 0.5| increases; however, choosing large values of p1 leads to the loss of the advantage of confidentiality, so that the bias arising from incomplete or untruthful answers dominates the mean square error. That is because the efficiency of a randomized response estimate depends on the psychological reaction of the respondents to the particular randomized response design, which persuades them to cooperate or not to cooperate. This was the motivation behind the modified RR model presented in this work, which allows increasing the probability of selecting the sensitive question without making respondents more suspicious, since suspicion increases untruthful responses and, therefore, the variance of the estimate. This is done by increasing the efficiency of Warner's model through a random multi-stage tool that allows increasing the value of p1 to a reasonable extent without significantly affecting respondents' confidence in the tool and their truthfulness.

2.2. Proposed RR model

The proposed model is based on a three-stage random tool that is distributed to the selected sample as shown in Fig. 1. In the first two stages (S1 and S2), each respondent's device selects one of two alternatives: the sensitive yes/no question itself, or an instruction to proceed to the next randomization device. The third stage (S3) is a Warner device (two complementary yes/no questions), exactly as in the original Warner's model. At the end of the process, all responses are collected, some of which are "yes" and the rest "no". The probability of getting a "yes" (α) can be expressed as follows:

α = p1π + q1[p2π + q2(p3π + q3(1 − π))]   (3)

where:
π: the proportion of individuals, within the population, who belong to the sensitive group;
ps: the probability that the question about belonging to the sensitive group shows up at stage s, where s = 1, 2, 3 and ps + qs = 1.

The proposed estimator of the proportion of individuals, within the population, who belong to the sensitive group (π̂*) is:

π̂* = (α̂ − Q) / (1 − 2Q),  Q ≠ 0.5   (4)

where α̂ is the proportion of "yes" answers obtained from the n sampled respondents and Q = q1q2q3. To check the validity of Eq.
(4), note that replacing Q with q1 recovers Warner's estimate as given by Eq. (1).

[Fig. 1. Proposed model flowchart.]

Properties of the proposed estimator. Since the random variable nα̂ ~ Bin(n, α), α̂ is an unbiased estimator of α, and the variance of π̂* can be expressed as follows.

Theorem 1. The variance of the proposed estimator, V(π̂*), is given by

V(π̂*) = π(1 − π)/n + Q(1 − Q) / [n(1 − 2Q)²]   (5)

Proof of Theorem 1. Using Eq. (4), the variance of π̂* is

V(π̂*) = V[(α̂ − Q)/(1 − 2Q)] = V(α̂) / (1 − 2Q)²   (6)

Since nα̂ follows a binomial distribution with parameters n and α,

V(α̂) = α(1 − α)/n   (7)

Substituting Eq. (7) into Eq. (6) gives

V(π̂*) = α(1 − α) / [n(1 − 2Q)²]   (8)

We can use Eq. (3) to calculate α(1 − α) as follows:

α(1 − α) = π(1 − π)(1 − 2Q)² + Q(1 − Q)   (9)

and Eq. (5) follows by inserting Eq. (9) into Eq. (8). □

To check the validity of Eq. (5), note that replacing Q with q1 recovers the variance of Warner's estimate as given by Eq. (2).

Theorem 2. An unbiased estimator of V(π̂*) is

V̂(π̂*) = α̂(1 − α̂) / [(n − 1)(1 − 2Q)²]   (10)

Proof of Theorem 2. Taking expectations on both sides of Eq. (10), the result holds. □

[Fig. 2. The difference in efficiency between Warner's model and the proposed model: (a) q1 = 0.1; (b) q1 = 0.2; (c) q1 = 0.3; (d) q1 = 0.4.]

2.3. Efficiency comparison

Here, our focus lies on exploring the specific conditions under which the efficiency of the proposed model, based on a three-stage random tool, outperforms both the original Warner's model, which relies on a one-stage random tool, and the well-known Mangat & Singh's model, which utilizes a two-stage random tool. The proposed estimator will be more efficient than the original Warner's estimator iff:

q2q3 < (1 − q1)(1 − q1q2q3)⁻¹   (11)

Inequality (11) shows that the proposed strategy can always be made more efficient than the usual Warner's strategy by choosing suitable values of q2 and q3 for any practicable value of q1. Fig. 2 (a–d) shows the difference, in terms of efficiency, between Warner's model and the proposed model at practicable values of q1 and different values of q2 and q3; positive values are in favor of the proposed model. From Fig. 2, it may be noted that:

1. For all values of q2 and q3, and practicable values of q1 (q1 < 0.5), the proposed estimate is more efficient than Warner's estimate.
2. For fixed values of q2 and q3, the efficiency of the proposed estimate against Warner's estimate increases as q1 increases from 0.1 to 0.4.
3. For fixed values of q1 and q2, the efficiency of the proposed estimate against Warner's estimate increases as q3 decreases from 0.9 to 0.1 (since the variance of the proposed estimate decreases as q3 decreases, while the variance of Warner's estimate is fixed).
4. For fixed values of q1 and q3, the efficiency of the proposed estimate against Warner's estimate increases as q2 decreases from 0.9 to 0.1 (since the variance of the proposed estimate decreases as q2 decreases, while the variance of Warner's estimate is fixed).

The proposed estimator will be more efficient than the original Mangat & Singh's estimator iff:

q3 < (1 − q1q2)(1 − q1q2q3)⁻¹   (12)

Inequality (12) shows that the proposed strategy can always be made more efficient than Mangat & Singh's strategy by choosing suitable values of q3 for any practicable values of q1 and q2.

[Fig. 3. The difference in efficiency between Mangat & Singh's model and the proposed model: (a) q1 = 0.1; (b) q1 = 0.2; (c) q1 = 0.3; (d) q1 = 0.4.]

Fig. 3 (a–d) shows the difference, in terms of efficiency, between Mangat & Singh's model and the proposed model at practicable values of q1 and different values of q2 and q3; positive values are in favor of the proposed model. From Fig. 3, it may be noted that:

1. For all values of q3, and practicable values of q1 and q2, the proposed estimate is more efficient than Mangat & Singh's estimate.
2. For fixed values of q2 and q3, the efficiency of the proposed estimate against Mangat & Singh's estimate increases as q1 increases from 0.1 to 0.4.
3. For fixed values of q1 and q3, the efficiency of the proposed estimate against Mangat & Singh's estimate increases as q2 increases from 0.1 to 0.4.
4. For fixed values of q1 and q2, the efficiency of the proposed estimate against Mangat & Singh's estimate increases as q3 decreases from 0.9 to 0.1 (since the variance of the proposed estimate decreases as q3 decreases, while the variance of Mangat & Singh's estimate is fixed).

2.4. Measure of privacy protection

One of the basic characteristics of an RR model is protecting the privacy of respondents. Different measures of privacy protection for RR models have been proposed (Anderson; Lanke; Leysieffer and Warner; Zhimin and Zaizai). Applying the latter, the measure of protection for Warner's model is given as

M_W(R) = (1 − 2q1)² / [2q1(1 − q1)]   (13)

and the design probabilities for the proposed RR model can be obtained as:

P(y|A) = 1 − Q and P(y|Ā) = Q;
P(n|A) = Q and P(n|Ā) = 1 − Q;

and

P(A|y) = π / [π + (1 − π)Q/(1 − Q)],  P(A|n) = π / [π + (1 − π)(1 − Q)/Q].

Hence, the measure of privacy protection is obtained as

M_P(R) = |1 − (1/2)[τ(y) + τ(n)]|, where τ(y) = (1 − Q)/Q and τ(n) = Q/(1 − Q),

which gives

M_P(R) = (1 − 2Q)² / [2Q(1 − Q)]   (14)

To check the validity of Eq. (14), note that replacing Q with q1 recovers the measure of protection for Warner's model as given by Eq. (13).
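The efficiency condition and the privacy measures lend themselves to a quick numerical check. The sketch below (illustrative code, not from the paper) evaluates the randomization penalty term that Eqs. (2) and (5) add to the binomial variance, verifies condition (11) against a direct comparison of the penalties over a grid of (q1, q2, q3) values, and confirms that the penalty is exactly 1/(2M), the reciprocal of twice the privacy measure of Eqs. (13)–(14).

```python
# Numerical sanity check (a sketch, not the paper's code): the randomization
# penalty q(1-q)/(1-2q)^2 from Eqs. (2) and (5), the privacy measure
# M(R) = (1-2q)^2 / (2q(1-q)) from Eqs. (13)-(14), and condition (11)
# checked against a direct comparison of the penalties.
import itertools
import math

def penalty(q: float) -> float:
    """Variance penalty term q(1-q)/(1-2q)^2 (Warner: q = q1; proposed: q = Q)."""
    return q * (1.0 - q) / (1.0 - 2.0 * q) ** 2

def privacy(q: float) -> float:
    """Privacy-protection measure (1-2q)^2 / (2q(1-q))."""
    return (1.0 - 2.0 * q) ** 2 / (2.0 * q * (1.0 - q))

grid = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.9]
for q1, q2, q3 in itertools.product(grid, repeat=3):
    Q = q1 * q2 * q3
    cond_11 = q2 * q3 < (1 - q1) / (1 - q1 * q2 * q3)   # Eq. (11)
    beats_warner = penalty(Q) < penalty(q1)             # direct comparison
    assert cond_11 == beats_warner
    # penalty and privacy measure are exact reciprocals up to a factor 2
    assert math.isclose(penalty(Q), 1.0 / (2.0 * privacy(Q)), rel_tol=1e-9)
print("condition (11) matches the direct variance comparison on the grid")
```

The reciprocal relation makes the efficiency/privacy trade-off explicit: a smaller privacy measure (more protection) forces a larger variance penalty, and vice versa.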
The relation between the previous measure of privacy protection and the efficiency of both Warner's model and the proposed model can be presented as follows:

V(π̂) = π(1 − π)/n + 1/[2n·M_W(R)] and V(π̂*) = π(1 − π)/n + 1/[2n·M_P(R)].

Zhimin and Zaizai showed that the smaller the value of their measure of privacy protection, the more the privacy of respondents is protected; from the expressions above, more privacy therefore comes at the price of a larger variance. A balancing act is obviously necessary.

3. Empirical study and results

Saudi traffic law, Chapter 2, Article 8, states that "C. Each vehicle's owner shall insure his/her …". In practice, noncompliance takes forms such as:

- The vehicle may be driven by a person other than the owner, whose name must be stated as a driver in the insurance policy, which is not often the case. This usually happens within families and among friends in the local community.
- The vehicle may be used for a purpose other than that mentioned in the insurance policy; for example, when people use their cars to transport others for a fee, while the purpose of use stated in the insurance policy is private, not commercial.

The proposed model was applied through an experimental study in which the population was final-year male undergraduate students within a college of business who (1) own or drive a car (which is very common among university male students) and (2) have knowledge of the basics of risk and insurance. The rationale behind this selection was that individuals within this 'selected' population are expected to have more awareness of the importance of third party car insurance, regardless of their 'assumed' compliance in buying such insurance. Moreover, according to the latest published official statistics by the Saudi Ministry of Health, males within the age group 19–30 are the most vulnerable to injury and death due to road traffic accidents. Fig. 4 (a–d) shows the classification of injuries and deaths from car accidents in Saudi Arabia in 2020 by age (Fig. 4 (a, b)) and gender (Fig. 4 (c, d)).

[Fig. 4. Traffic injuries and deaths in Saudi Arabia (2020) by age group and gender: (a) injuries by age; (b) deaths by age; (c) injuries by gender; (d) deaths by gender.]

A random sample of 40 students was selected; then, a few days before the experiment, they were notified of, and agreed on, the location (in the college), date and time of the experiment. At the beginning of the experiment, a short presentation was delivered explaining the whole process and emphasizing that, by design, their privacy is well preserved. Each respondent goes through the experiment behind a partition, without being seen by anyone else in the room, and leaves the room when finished. An empty box, 100 'Yes' cards, 100 'No' cards, and three spinner devices (smartphones with a spinner app, Spin The Wheel, installed) were used for the purpose. The spinner app in the first two devices was set, with q1 = q2 = 0.4, to show one of two options:

- Do you comply with the third-party insurance requirements and terms on your car at all times?
- Go to the next device.
So, if option 1 appeared on either of the first two devices, the experiment ends, and the respondent drops a 'Yes' or 'No' card in the box. If the experiment extends to the third spinner device, the respondent answers one of the following two questions, each having an equal chance of showing up (q3 = 0.5):

- Do you comply with the third-party insurance requirements and terms on your car at all times?
- Do you NOT comply with the third-party insurance requirements and terms on your car at all times?

Based on the results obtained from the sample and applying Eq. (4), the estimate of the proportion of individuals, within the selected population, who do not comply with buying obligatory third-party car insurance, π̂*, is 0.1875, and its estimated variance V̂(π̂*) is 0.007512 (Eq. (10)). Thus, a 95% confidence interval for π is (0.018, 0.357).

4. Discussion

The suggested RR model provides an efficient alternative to both Warner's model and Mangat & Singh's model that allows more credibility from a practical perspective. Setting the probabilities q1, q2, q3 to 0.4, 0.4, 0.5, respectively, seems a rational choice in the sense that both the efficiency and the privacy of the proposed model are equal to those of Warner's model at p1 = 0.9 and those of Mangat & Singh's model at p1 = 0.8. Furthermore, this selection maintains a good balance between increasing the likelihood of the sensitive question appearing and reducing the level of suspicion among respondents, and hence raising their cooperation. Fig. 5 shows that the efficiency of the proposed model is better than that of both Warner's and Mangat & Singh's models at q2 = q3 = 0.5 for all values of q1. Also, at q2 = q3 = 0.5 and q1 = 0.4, the efficiency of the proposed model corresponds to that of Warner's model at p1 = 0.9 and to that of Mangat & Singh's model at p1 = 0.8.

As for the specific application of the suggested model, estimating the obligatory motor insurance noncompliance ratio, the resulting estimate (0.1875) may be viewed as a close-to-minimum value for the whole car owners' population in Saudi Arabia, because the selected sub-population to which the RR model was applied represents car owners with 'high' awareness of insurance. Furthermore, this estimate can be considered a basis for predicting the level of inclusion of this type of insurance, which is a matter of importance in the context of both social and financial sustainability. It would also be beneficial for insurance companies to estimate this ratio within different areas or communities from time to time, to adjust insurance rates and terms accordingly. Since gross premiums in insurance include a loadings component, this ratio can be taken into consideration to adjust this component if necessary.
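The estimation step itself is a one-line computation once the 'yes' tally is known. The sketch below applies Eqs. (4) and (10) with the experiment's design values q1 = q2 = 0.4 and q3 = 0.5; the yes-count of 10 out of 40 is hypothetical, since the paper reports only the resulting estimates. The final assertion checks the correspondence noted above for q1 = 0.4, q2 = q3 = 0.5 against Warner's model at p1 = 0.9.

```python
# Sketch of the estimation step (Eqs. (4) and (10)). The design values
# q1 = q2 = 0.4, q3 = 0.5 are those of the experiment; the yes-count of
# 10 out of n = 40 is hypothetical.
import math

def rr_estimate(n_yes: int, n: int, Q: float):
    """Point estimate, estimated variance and 95% CI for the sensitive proportion."""
    alpha_hat = n_yes / n
    pi_hat = (alpha_hat - Q) / (1.0 - 2.0 * Q)                              # Eq. (4)
    var_hat = alpha_hat * (1.0 - alpha_hat) / ((n - 1) * (1.0 - 2.0 * Q) ** 2)  # Eq. (10)
    half = 1.96 * math.sqrt(var_hat)
    return pi_hat, var_hat, (pi_hat - half, pi_hat + half)

Q = 0.4 * 0.4 * 0.5                      # overall pass-through probability
pi_hat, var_hat, ci = rr_estimate(n_yes=10, n=40, Q=Q)
print(round(pi_hat, 4), tuple(round(c, 3) for c in ci))

# Correspondence noted in the discussion: at q1 = 0.4 and q2 = q3 = 0.5 the
# proposed model has Q = 0.1, i.e. the same randomization penalty as Warner's
# model with p1 = 0.9 (q1 = 0.1).
penalty = lambda q: q * (1.0 - q) / (1.0 - 2.0 * q) ** 2
assert math.isclose(penalty(0.4 * 0.5 * 0.5), penalty(1.0 - 0.9), rel_tol=1e-9)
```

The same function reproduces the paper's interval arithmetic: 0.1875 ± 1.96·√0.007512 indeed gives (0.018, 0.357).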
In general, RR models, including the one presented in this article, can be an efficient data collection tool to investigate sensitive attributes in various areas.

[Fig. 5. Efficiency comparison between Warner's model, Mangat & Singh's model and the proposed model at selected values of q1, q2, q3.]

Funding

This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University (KFU), Saudi Arabia (GRANT5951).

Ethics statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Deanship of Scientific Research ethical committee, King Faisal University (date of approval: 23 February 2022).

Data availability statement

The authors declare that the data supporting the findings of this study are available within the article.

References

L. Barabesi, M. Marcheselli, Bayesian estimation of proportion and sensitivity level in randomized response procedures, Metrika 72 (2010) 75–88.
Z. Hussain, J. Shabbir, M. Riaz, Bayesian estimation using Warner's randomized response model through simple and mixture prior distributions, Commun. Stat. Simulat. Comput. 40 (2010) 147–164.
H. Xin, J. Zhu, T.-R. Tsai, C.-Y. Hung, Hierarchical Bayesian modeling and randomized response method for inferring the sensitive-nature proportion, Mathematics 9 (2021) 2518.
H.H. Ki, K.Y. Jun, Y.L. Hwa, A stratified randomized response technique, Korean J. Appl. Statistics 7 (1994) 141–147.
J.-M. Kim, W.D. Warde, A stratified Warner's randomized response model, J. Stat. Plann. Inference 120 (2004) 155–165.
J.-M. Kim, M.E. Elam, A two-stage stratified Warner's randomized response model using optimal allocation, Metrika 61 (2005) 1–7.
S. Ghufran, S. Khowaja, M.J. Ahsan, Compromise allocation in multivariate stratified sample surveys under two stage randomized response model, Optim. Lett. 8 (2014) 343–357.
H.P. Singh, T.A. Tarray, A stratified Tracy and Osahan's two-stage randomized response model, Commun. Stat. Theor. Methods 45 (2016) 3126–3137.
S. Abdelfatah, R. Mazloum, An efficient two-stage randomized response model under stratified random sampling, Math. Popul. Stud. 23 (2016) 222–238.
T. Tarray, H. Singh, A proficient two-stage stratified randomized response strategy, J. Mod. Appl. Stat. Methods 17 (2018).
Z. Hussain, S.A. Cheema, I. Hussain, A stratified randomized response model for sensitive characteristics using non identical trials, Commun. Stat. Theor. Methods 49 (2020) 99–115.
N. Gupta, S. Gupta, Mohd Tanwir Akhtar, Multi-choice stratified randomized response model with two-stage classification, J. Stat. Comput. Simulat. 92 (2022) 895–910.
B.G. Greenberg, A.-L.A. Abul-Ela, W.R. Simmons, D.G. Horvitz, The un
Heliyon 10 (2024) e36501. Available online 18 August 2024. 2405-8440/© 2024 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license ( ).

Research article

Pricing weekly motor insurance drivers' with behavioral and contextual telematics data

Montserrat Guillen (a,b,*), Ana M. Pérez-Marín (a,b), Jens P. Nielsen (c)
(a) Departament d'Econometria, Estadística i Economia Aplicada, Universitat de Barcelona (UB), Av. Diagonal, 690, 08034, Barcelona, Spain
(b) RISKcenter-Institut de Recerca en Economia Aplicada (IREA), Universitat de Barcelona (UB), Av. Diagonal, 690, 08034, Barcelona, Spain
(c) Bayes Business School, City, University of London, 106 Bunhill Row, London, EC1Y 8TZ, United Kingdom

ARTICLE INFO. Keywords: Motor insurance; Near-miss; Traffic accident; Highway; Speed

ABSTRACT. Telematics boxes integrated into vehicles are instrumental in capturing driving data encompassing behavioral and contextual information, including speed, distance travelled by road type, and time of day. These data can be amalgamated with drivers' individual attributes and accident occurrences reported to their respective insurance providers. Our study analyzes a substantial sample of 19,214 individual drivers over a span of 55 weeks, covering a cumulative distance of 181.4 million kilometers driven. Utilizing this dataset, we develop predictive models for weekly accident frequency. As anticipated based on prior research with yearly data, our findings affirm that behavioral traits, such as instances of excessive speed, and contextual data pertaining to road type and time of day significantly aid ratemaking design. The predictive models enable the creation of driving scores and personalized warnings, presenting a potential to enhance traffic safety by alerting drivers to perilous conditions.
Our discussion delves into the construction of multiplicative scores derived from Poisson regression, contrasting them with additive scores resulting from a linear probability model approach, which offer greater communicability. Furthermore, we demonstrate that the inclusion of lagged behavioral and contextual factors not only enhances prediction accuracy but also lays the foundation for a diverse range of usage-based insurance schemes for weekly payments.

1. Introduction

Data providers that collect telematics from vehicles in motion usually do not have access to evidence from accidents, which could easily be retrieved from insurance records. At the same time, insurers make little use of the massive amounts of material gathered by telematics boxes; they rarely look at detailed telematics information and mostly resort to driving mileage only. The dissociation between information suppliers comes together with the reluctance of insurance companies to reveal the nature of their rating factors to external parties. All in all, this has considerably slowed down research on measurable driving behavior and operating circumstances that explain a driver's proneness to cause a
traffic accident, in spite of a massive amount of information that is known to have been recorded somewhere. Therefore, data inaccessibility and the lack of synergies are the reasons why we do not expect to see major transformations in usage-based insurance in the market in the near future. We do, however, find pay-as-you-drive schemes being commercialized all over the world. Under those systems, drivers pay a constant fee plus some cost per mile. Typically, these schemes do not consider where or when the distance has been driven, or how drivers are effectively managing their vehicles. Our contribution aims to bridge the gap in the literature on analytic methods to assess the role of behavioral and contextual driving data in predicting expected accident frequency on a weekly basis, something that has a direct implication for the expansion of usage-based insurance. Our method proposes an algorithm to estimate risky driving scores that can be used to provide feedback to drivers about their performance at the wheel, or to construct insurance tariffs based not only on how many miles are driven, but also on when and where distance is driven and how a driver is operating their car. (*Corresponding author: M. Guillen, Departament d'Econometria, Estadística i Economia Aplicada, Universitat de Barcelona (UB), Av. Diagonal, 690, 08034, Barcelona, Spain. E-mail addresses: mguillen@ub.edu (M. Guillen), amperez@ub.edu (A.M. Pérez-Marín), jens.nielsen.1@city.ac.uk (J.P. Nielsen). Received 7 December 2023; received in revised form 9 July 2024; accepted 16 August 2024.)
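A pay-as-you-drive tariff of the kind described above, a constant fee plus a cost per unit of distance, can be sketched in a few lines. All fees and rates below are hypothetical, and the optional surcharges illustrate how such a scheme could also account for when, where, and how the distance is driven:

```python
def weekly_payd_premium(total_km, night_km=0.0, urban_km=0.0, speed_events=0,
                        flat_fee=2.50, base_rate=0.020,
                        night_surcharge=0.015, urban_surcharge=0.010,
                        speed_event_charge=0.30):
    """Weekly pay-as-you-drive premium: a flat fee plus a cost per kilometre,
    with optional surcharges for when/where/how the distance was driven.
    All monetary amounts are hypothetical (euros)."""
    premium = flat_fee + base_rate * total_km
    premium += night_surcharge * night_km          # night kilometres cost extra
    premium += urban_surcharge * urban_km          # congested urban kilometres cost extra
    premium += speed_event_charge * speed_events   # each speeding event adds a charge
    return round(premium, 2)

# A classic PAYD scheme looks only at total distance:
basic = weekly_payd_premium(total_km=229)
# The same distance driven in riskier conditions costs more:
risky = weekly_payd_premium(total_km=229, night_km=20, urban_km=60, speed_events=3)
```

A distance-only scheme would charge both drivers above identically through `total_km`; the surcharge terms are what let context and behavior enter the weekly bill.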
Contrary to the existing literature, we offer a comprehensive analysis of advanced driving risk assessment in weekly periods. Our approach has the potential to be integrated into new vehicles by innovative manufacturers to provide feedback to the driver, or to serve as the basis for insurance ratemaking that includes behavioral and contextual driving data. Weekly ratemaking in motor insurance becomes particularly intriguing in the context of carsharing vehicles or rental cars, wherein drivers undergo frequent and dynamic changes. The fluid nature of driver rotations in such scenarios necessitates a nuanced approach to ratemaking, accommodating the variability in driver profiles and usage patterns. This underscores the importance of a weekly rate structure that can adapt to the evolving nature of the driver pool, ensuring a fair and economically viable insurance framework for both the service providers and the transient drivers involved. Unlike previous contributions, we are able to disclose predictors and examine the
role of timely data collection. We therefore take full advantage of the fact that telematics data provide a continuous source of information. We argue that a driving risk score can be formulated weekly and that lagged information from the previous week is informative about future accident occurrence. We provide an analysis of a unique dataset of 19,214 drivers observed over more than one year, with a total distance covered of 181,392,006 km, making this the most complete study of its kind in the literature to date. Our telematics data contain behavioral information on speeding event counts, that is, the number of times in a week, per type of road, that a driver exceeded the maximum posted speed. The data also contain distance driven per type of road and distance driven at night. Contrary to most analyses, which focus on yearly data, our proposed methodology can be generalized to time intervals shorter than one year, such as daily or monthly data, and it can also be implemented by trip, so that a driving risk score is provided after a trip is finished. Unlike the driving scores found in some modern cars, we are able to adapt the score to a variety of contextual information. For example, driving in an urban area at low speed is not necessarily a sign of cautious behavior; it may be the consequence of heavy traffic congestion and, consequently, of a higher risk of having an accident. We show that incorporating behavioral and contextual data about the driver's experience improves the prediction performance of classical frequency models used in accident analysis, compared to not including this information, when considering telematics information from the same period.
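Testing whether last week's telematics inform this week's risk requires a panel in which each driver-week row carries both current and lagged predictors. A minimal sketch of that pre-processing step, using invented field names and toy values, could look like:

```python
# Weekly observations per driver: (week, distance_km, speed_events, claims).
# Field names and values are invented for illustration.
panel = {
    "driver_A": [(1, 180, 2, 0), (2, 210, 5, 0), (3, 90, 1, 1)],
    "driver_B": [(1, 40, 0, 0), (2, 55, 3, 0)],
}

def add_lagged_features(panel):
    """Attach the previous observed week's telematics to each observation.
    The first observed week of each driver is dropped, since it has no lag;
    weeks in which a driver did not drive are assumed absent from the panel."""
    rows = []
    for driver, weeks in panel.items():
        ordered = sorted(weeks)                    # ensure chronological order
        for prev, cur in zip(ordered, ordered[1:]):
            week, dist, events, claims = cur
            _, lag_dist, lag_events, _ = prev
            rows.append({"driver": driver, "week": week,
                         "distance_km": dist, "speed_events": events,
                         "lag_distance_km": lag_dist,
                         "lag_speed_events": lag_events,
                         "claims": claims})
    return rows

rows = add_lagged_features(panel)
```

Each resulting row pairs the outcome of week t with both week t and week t-1 predictors, which is exactly the alignment needed to ask whether last week's driving anticipates this week's claims.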
The improvement from same-period information is well known, but we also show that lagged information, i.e. telematics data from the previous period, helps anticipate accident frequency in the subsequent period. This opens the door to establishing warning scores that can help drivers identify how their probability of suffering an accident changes over time. We analyze a unique combination of weekly information on individual drivers and their insurance provider records. Telematics boxes collect data on distance travelled, type of road, time of day, and speeding events, which are then integrated with accident occurrence records. By amalgamating information on driving style and contextual data with traditional ratemaking factors such as gender, age, and vehicle power, we confirm a substantial enhancement in the ability to predict accident frequency. We further demonstrate how a multiplicative scheme derived from a Poisson model specification or an additive scheme resulting from a linear probability model specification may alter the existing usage-based insurance
pricing in the current marketplace, primarily predicated on distance driven, irrespective of the manner and location of driving. Through our contribution, we emphasize that accident frequency can be anticipated by considering when, where, and how a driver behaves behind the wheel. Previous research has offered only partial solutions, either due to the lack of merged accident and telematics data or to the non-disclosure by insurance companies of the factors influencing accident frequency. Our contribution provides a comprehensive perspective, introducing new insights that can significantly enhance usage-based insurance schemes and payment models based on weekly data, incorporating contextual information beyond behavioral patterns.

2. Background

The literature on usage-based insurance is extensive and has been intensively developed in the last twenty years. Eling and Kraft conducted a thorough review of numerous academic studies and industry papers spanning nearly two decades, from 2000 to 2019. These works predominantly focused on investigating the pivotal telematics variable for estimating claim frequency: distance driven. Lemaire et al. highlighted the significance of annual mileage as a potent predictor of at-fault claims. More recently, Gao et al. provided a survey of telematics driving data research in actuarial science, with a thorough description of the nature of telematics driving data, received second by second, and the difficulties one faces dealing with such information. The concept of usage-based insurance (UBI) initially revolved around assessing insurance rates based on the distance covered, as elucidated in Litman's discussion of various distance-based insurance price structures.
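Distance-based price structures of the kind Litman surveyed can take several shapes. The sketch below contrasts a uniform per-kilometre rate with a declining-block structure in which marginal kilometres become cheaper; both structures and all rates are hypothetical:

```python
def uniform_rate_premium(km, rate=0.05):
    """Every kilometre costs the same (hypothetical rate)."""
    return rate * km

def declining_block_premium(km, blocks=((5000, 0.06),
                                        (10000, 0.04),
                                        (float("inf"), 0.02))):
    """Kilometres are charged in blocks with declining marginal rates:
    the first 5,000 km at 0.06/km, the next 5,000 at 0.04/km, and the
    remainder at 0.02/km. Block limits and rates are hypothetical."""
    premium, lower = 0.0, 0
    for upper, rate in blocks:
        if km > lower:
            # charge only the kilometres that fall inside this block
            premium += rate * (min(km, upper) - lower)
        lower = upper
    return premium
```

Under the declining-block structure, a driver covering 20,000 km pays less than twice what a driver covering 10,000 km pays, a sublinearity that distance-only tariffs can encode even before behavioral data enter the picture.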
The role of mileage and its correlation with claim frequency was examined in conjunction with other factors. Boucher et al. concluded that although total distance driven is a pertinent variable, the relationship between distance driven and accident occurrence might not be strictly linear due to the “learning effect.” Essentially, this means that individuals who drive twice as much as others with identical characteristics have fewer than twice the accident claims. Moreover, it can be argued that covering more distance might indicate superior driving skills or a propensity to use safer roads like highways, which are typically associated with long-distance trips. These roads tend to have a decreasing marginal effect on the probability of accidents occurring. Boucher et al. employed a generalized additive model approach to scrutinize the impact of both distance driven and the duration of insurance contracts on claim frequency. Surprisingly, they discovered that neither distance nor contract duration exhibited a linear
relationship with claim frequency. Adding to this, Guillen et al. incorporated yearly distance travelled as an offset within a zero-inflated Poisson model to account for excess zeros in claim frequency counts. They also noted a non-linear effect of this variable in their dataset. More recently, Boucher and Turcotte utilized GAMLSS (Generalized Additive Models for Location, Scale, and Shape) and GAMs (Generalized Additive Models) with fixed effects to analyze telematics count data in a panel setting. Their findings contradicted earlier perceptions of non-linearity, suggesting that the relationship does appear to be linear; they attributed the apparent non-linearity to residual heterogeneity, effectively captured by the GAMs. Beyond just considering mileage, a plethora of evidence suggests that various telematics variables hold a strong causal link with accidents. Consequently, these variables can significantly enhance the predictive accuracy of frequency models utilized in automobile insurance. For instance, Verbelen et al. (2018) contend that telematics data empower the tailoring of automobile insurance pricing based on policyholders' driving behavior. They devised a statistical modeling approach for claim frequency using telematics variables and demonstrated that such variables bolster the model's predictive capability; consequently, gender as a discriminating rating variable becomes obsolete. A similar finding was echoed by Ayuso et al. In a more recent study, Ayuso et al. constructed a frequency model adaptable to updates with telemetric data. Their research affirmed that not only the distance covered but also driver habits significantly impact the anticipated number of at-fault accident claims.
This revelation underscores that the cost of insurance coverage can be personalized. Telemetry enables insurers to consider factors identified by traffic authorities as associated with risky driving, including traffic violations. So et al. delved into the integration of telematics data into a classification model to ascertain driver heterogeneity, utilizing data gleaned from a Canadian telematics program. Their investigation revealed that evaluating driving behavior is markedly enhanced when employing telematics in comparison to traditional risk factors. In this context, identifying telematics variables with significant predictive power for accident frequency is pivotal. Modern telematics technologies in car insurance generate vast amounts of data, obtained from high-frequency GPS location data (measured per second) from individual car drivers and trips, leading to the proliferation of big data in the insurance industry. Paefgen et al. noted the complexity and data volume associated with usage-based insurance pricing, emphasizing its
challenge in actuarial decision-making. They analyzed real raw location data, considering 15 predictor variables, and compared logistic regression, neural network, and decision tree classifiers. Their study demonstrated that while neural networks exhibited superior classification performance, logistic regression was more favorable from an actuarial perspective due to its ease of interpretation and direct effect quantification. Their results highlighted the potential of high-resolution exposure data in simplifying usage-based insurance pricing. Baecke and Bocca explored risk assessment models integrating driving behavior data using three distinct data mining techniques. They concluded that including standard telematics variables significantly enhanced customer risk assessment, enabling insurers to tailor their products to individual risk profiles. The study also emphasized the importance of incorporating easily interpretable data mining techniques mandated by regulators before advancing to more complex predictive models. Moreover, they demonstrated that telematics-based insurance products could be swiftly implemented, requiring only three months of data for reliable risk estimations. Huang and Meng utilized logistic regression and four machine learning techniques as risk probability models and Poisson regression as a claim frequency model. They established tariff classes with substantial predictive effects, proposing a pricing framework that improved both interpretability and predictive accuracy. Their empirical results reaffirmed the considerable potential of driving behavior variables in automobile insurance. Pesantez et al.
also highlighted logistic regression as a suitable model for predicting claim frequency using telematics information, given its interpretability and good predictive capacity. Despite implementing modern machine learning modeling approaches, they observed that XGBoost necessitated extensive model-tuning procedures to match logistic regression's predictive performance and required more effort for interpretation. In the realm of machine learning, numerous contributions focused on driving pattern recognition, which can be leveraged to determine accident safety scores and enhance insurance pricing. Weidner et al. identified maneuver patterns, trips, trip segments, and the total insurance period as significant indicators of individual driving behavior. Wüthrich utilized high-frequency GPS location data and innovative algorithms to classify distinct driving styles, demonstrating their applicability in regression analysis for car insurance pricing. Gao and Wüthrich introduced speed and acceleration heatmaps, categorized using the K-means algorithm to differentiate varying driving styles. Gao et al. further explored telematics covariates
extracted from car driving data, affirming their superior predictive power for claim frequencies compared to traditional pricing factors like driver's age. Gao and Wüthrich extracted feature information from high-frequency GPS location data, utilizing it to allocate individual car driving trips to specific drivers. Geyer et al. defined a driving factor based on overall distance driven, the number of car rides, and speeding, identifying a significant impact of the speed driving factor on risk. Meng et al. calculated risk scores using a supervised driving risk scoring neural network model, demonstrating improved prediction performance for claim frequency when incorporating these risk scores. Arumugan and Bhargavi conducted a survey on driving behavior in usage-based insurance using big data. They proposed a solution that assesses the risk posed by aggressive driving and road rage incidents by considering the behavioral and emotional factors of a driver. Ziakopoulos et al. reported that telematics-based pricing entails crash reductions of 20%–43% and harsh-event reductions of 10%–52%. However, they also noted that telematics-based research might have biases stemming from data availability. The usefulness of telematics-supported driver behavior analysis is addressed by Ziakopoulos et al. and Siami et al. Pérez-Marín and Guillen investigated telematics information for risk quantification and safety in vehicles with speed control capabilities, emphasizing the potential to reduce accident claims by addressing excess speed. Guillen et al.
identified relevant risk factors to streamline the telematics information necessary for risk classification, introducing the concept of near-miss events in usage-based insurance, i.e. recorded risky events, such as braking, cornering or smartphone use, that are positively correlated with accident occurrence. Their analysis revealed that near-miss events, even if no accident is recorded, offer valuable insights for dynamic risk monitoring through telematics. Recently, Alrassy et al. investigated driver behavior obtained from large-scale telematics data and its relationship with crash data. They found that hard braking is more indicative of higher collision rates on highways, while hard acceleration is a stronger risk indicator on non-highway urban roads. Guillen et al. integrated telematics data in UBI pricing schemes that penalize near-miss occurrence. In their analysis, the authors compensate for the lack of claims during the period when telematics information was collected with the past claim history of insureds. This is a common limitation in actuarial research dealing with telematics data, specifically that the accident history does not match the telematics data collection period. Similarly,
Moosavi and Ramnath investigated drivers' styles and also used past at-fault traffic accidents and citations as risk indicators for clusters of drivers with similar driving behavior. Masello et al. found that the driving context has significant power in predicting driving risk. Tesla presented its Predicted Collision Frequency (PCF) formulas, shedding light on risk score components like forward collision warnings, hard braking, aggressive turning, unsafe following, and forced autopilot disengagement. This transparency contributes to the ongoing discussion on model opacity and showcases the relevance of driving behavior variables in assessing risk. Several car manufacturers have introduced similar safety score systems, emphasizing acceleration, braking, cornering behavior, and distance driven as key metrics to calculate driving performance scores. Regarding the effectiveness of telematics-based feedback in improving driving behavior, Li et al. remark that post-trip interventions have a limited effect if they are not part of a risk mitigation strategy able to improve long-term behavior. In that sense, the authors proposed providing personalized feedback with realistic and actionable suggestions for policyholders. Malekpour et al. found that merely providing feedback has a minuscule impact on reducing speeding behavior, and that financial incentives are necessary. Similarly, Meuleners et al.
concluded that personalized feedback does not seem to produce a significant change in the overall driving scores of young drivers (they only found some improvements for specific drivers). Appendix A1 provides a summary of telematics variables utilized in the literature for driving risk assessment, encompassing factors beyond distance travelled, such as speed, road type, time of vehicle usage, and the inclusion of near-miss events. These insights collectively advance the understanding of telematics variables and their role in shaping insurance products and pricing strategies.

3. Methods

Our strategy consists in predicting the expected frequency of accidents for driver $i$ in period $t$, in a sample of $n$ drivers, each observed over a total of $T_i$ time periods. In our application we observe weekly data, so $T_i$ is the total number of observed weeks for driver $i$. We define the maximum observation frame $T = \max_i T_i$. Our objective is to model the conditional mathematical expectation of accident frequency for driver $i$ in period $t$, denoted $E(y_{it})$, as a function of $J$ dynamic predictors $z_{jit}$, $j = 1, \dots, J$, which change over time, and $K$ static predictors $x_{ki}$, $k = 1, \dots, K$, which do not change over time, including a constant intercept. Generalized linear models specify a link between the linear predictor $h(x_{ki}, z_{jit})$ and the output $E(y_{it})$. A statistical distribution in the exponential
family for the response random variable $y_{it}$ is also specified. Parameter estimates of the linear predictor can easily be found by likelihood maximization. Other machine learning methods are more flexible in the specification of the combinations of static and dynamic factors, but they require establishing a loss-minimization principle. Random Forest or XGBoost methods usually provide accurate predictive algorithms, at the expense of interpretability and of an analytical expression for the expected accident frequency as a function of the predictors. In the pre-processing phase we transform some of the telematics information into risky events recorded as part of the dynamic predictors $z_{jit}$. This is done similarly to Guillen et al., where near-miss events are derived from telematics signals (such as hard braking) and contextual information (such as daytime driving); in addition, we consider event counts, for example the sum of excess-speed occurrences by type of road. A simple approach to calculating the impact of behavioral and contextual predictors on the expected frequency of accidents is provided. In order to convert the occurrence of telematics risky events or dangerous distance driven into a simple score, we may consider a linear probability model specification or a more general input function $h(x_{ki}, z_{jit})$, which may later be linearized. This linear approximation may not be necessary if we only aim at producing a risk score to inform the driver.
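The likelihood maximization mentioned above can be made concrete for the Poisson specification with a log link. The sketch below simulates a rare-event weekly panel (all coefficients and predictor distributions are invented, merely mimicking the order of magnitude of the data described later) and fits the model by iteratively reweighted least squares, the standard GLM fitting algorithm; the fitted values then serve as a multiplicative weekly risk score:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # simulated driver-weeks

# Design matrix: intercept, one static driver characteristic, and two dynamic
# telematics predictors. Distributions and coefficients are illustrative only.
X = np.column_stack([
    np.ones(n),                     # intercept
    rng.normal(size=n),             # standardized driver characteristic (x_ki)
    rng.exponential(0.23, size=n),  # weekly distance, thousands of km (z_jit)
    rng.poisson(3.2, size=n),       # weekly speed events (z_jit)
])
beta_true = np.array([-6.5, 0.1, 1.2, 0.08])
y = rng.poisson(np.exp(X @ beta_true))  # weekly at-fault claim counts

def fit_poisson_glm(X, y, n_iter=25):
    """Poisson regression with log link, fitted by iteratively
    reweighted least squares (Fisher scoring)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)             # E(y_it) under the current estimate
        z = eta + (y - mu) / mu      # working response
        W = mu                       # Poisson working weights (variance = mean)
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

beta_hat = fit_poisson_glm(X, y)
score = np.exp(X @ beta_hat)  # multiplicative weekly risk score per driver-week
```

Because the model is multiplicative, a unit increase in a predictor scales the expected weekly frequency by exp of its coefficient; translating that into an easily communicated tariff is what motivates the linearized alternatives discussed next.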
However, a linear formulation provides a straightforward way to design usage-based insurance schemes that are easy to communicate. When risky events have a direct linear impact, an insurance rate per week can be expressed as a flat rate plus additional charges for distance driven and risky-event occurrence. Charges can be homogeneous or can depend on contextual data, so the cost can vary with context and behavioral information. For example, distance driven while exceeding speed limits, on the weekend, at night or in congested (urban) areas has a different impact on accident risk, and hence on the final cost, than distance driven without speeding events, during weekdays, during the day and in non-urban areas. Several possibilities for static and dynamic scoring are presented in Table 1.

Table 1. Accident risk scoring formulae for static and dynamic predictive factors with distance driven as exposure (specification and its communication):
  $h(x_{ki})$: expected accident frequency depends on a function of driver characteristics only.
  $D_{it}\,h(x_{ki})$: expected accident frequency is proportional to current-period distance driven times a combination of driver characteristics.
  $D_{it}\,h(x_{ki}, z_{jit})$: expected accident frequency is proportional to current-period distance driven times a combination of driver characteristics and dynamic factors.

Note that even if Table
1 only aims at modelling accident frequency, usage-based insurance schemes can follow directly from frequency models, once average cost and general insurance charges are imputed proportionally to expected frequencies. Table 2 presents possible linearized specifications of driving scores. Our results explore basic classical generalized linear models; similar conclusions can be found for other specifications. Poisson models with a log link were estimated using SAS and R software. The link in the Poisson model equals

$h(x_{ki}, z_{jit}) = \exp\left( \sum_{k=1}^{K} \alpha_k x_{ki} + \sum_{j=1}^{J} \beta_j z_{jit} \right).$

Logistic regression and linear probability models are estimated too. The logistic link is

$h(x_{ki}, z_{jit}) = \left[ 1 + \exp\left( - \sum_{k=1}^{K} \alpha_k x_{ki} - \sum_{j=1}^{J} \beta_j z_{jit} \right) \right]^{-1},$

while the linear probability model uses the identity link. When a driver does not drive for one week, that week is excluded from the sample. We also observe that, on average, 20.29 km per week are travelled at night. The weekly number of speed events (on any type of road) is 3.19, with a maximum of 61; on urban roads the weekly number of speed events is 1.847, with a maximum of 23. Note that the mean weekly frequency of 0.001 corresponds to the expected annual claim rate for at-fault accidents (0.001 multiplied by 52 weeks). This rate level is not surprisingly high, given that the portfolio is slightly biased, comprising predominantly novice and young drivers. In 0.117% of the weekly observations there is one at-fault claim (in the remaining 99.883% there is no claim); this corresponds to a yearly frequency of 4.80%, which lies within the range of similar studies when only at-fault accidents are considered. Fig. 1 shows the evolution of mean distance driven for the drivers in this data set and Fig. 2 presents the frequency of at-fault claims over time. Fig.
3 shows histograms and bar charts of the variables in the data set. The sharp drop observed after the age of 35 in the sample can be attributed to the fact that Pay-as-You-Drive schemes were primarily marketed to young drivers, resulting in fewer older individuals participating in such pricing schemes. This demographic limitation should be acknowledged in our study. The small increase observed after the 95th percentile for driving in urban areas reflects the presence of drivers who primarily use their vehicles within their own city, without venturing onto highways or national interurban roads. While this does not constitute a limitation, it is an important aspect that merits discussion in our analysis.

Table 2. Accident risk scoring linear formulae for static and dynamic predictive factors considering distance or log-distance driven (specification and its communication):
  $\sum_{k=1}^{K} \alpha_k x_{ki} + \gamma D_{it} + (1-\gamma) \sum_{j=1}^{J} \beta_{2j} z_{jit}$: expected accident frequency is approximated (or bounded, for pricing purposes) by a static part that depends on a combination of driver characteristics, plus a linear combination of distance driven and same-period dynamic factors.
  $\sum_{k=1}^{K} \alpha_k x_{ki} + \gamma \ln D_{it} + (1-\gamma) \left[ \sum_{j=1}^{J} \beta_{2j} z_{jit} + \sum_{l=1}^{L} \beta_{3l} z_{li,t-1} \right]$: expected accident frequency is approximated (or bounded, for pricing purposes) by a static part that depends on a combination of driver characteristics, plus a linear combination of log-distance driven and same- and previous-period dynamic factors.

Table 3. Variable definitions in the telematics weekly data set, Spain 2019:
  VEHICLE_POWER: vehicle power (in Hp)
  AGE: age of the driver
  GENDER: 1 = male, 0 = female
  TOTAL_DISTANCE_DRIVENMK: thousands of kilometers travelled during the week
  KM_NIGHTMK: thousands of kilometers travelled at night during the week
  SPEED_EVENT: number of trips in the week in which the driver exceeded the posted speed limit on the road
  SPEED_EVENT_URBAN: number of trips in the week in which the driver exceeded the posted speed limit in an urban area
  PERC_URBAN: percentage of kilometers driven on urban roads
  CLAIM_AT_FAULT: number of claims at fault during the week

Table 4. Descriptive statistics in the telematics weekly data set, Spain 2019 (mean; standard deviation; minimum; maximum):
  AGE: 28.727; 4.667; 17.000; 74.000
  VEHICLE_POWER: 102.625; 29.876; 34.000; 450.000
  TOTAL_DISTANCE_DRIVENMK: 0.229; 0.204; 0.000; 5.974
  KM_NIGHTMK: 0.020
Article (preprint, not peer-reviewed version)

A Dual-Phase Framework for Enhanced Churn Prediction in Motor Insurance using Cave-Degree and Magnetic Force Perturbation Techniques

Emmanuel Chai (1), Kennedy Hadullo (2), Kevin Tole (2,*), Dorca Nyamusi Stephen (2)
1 Institute of Computing and Informatics, Technical University of Mombasa
2 Mathematics and Physics Department, Technical University of Mombasa
* Correspondence: ktole@tum.ac.ke

Posted: 24 September 2024. doi: 10.20944/preprints202409.1820.v1. Copyright: this is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Keywords: churn prediction; insurance motor; algorithm; cave degree; magnetic force

Abstract: This study presents a novel predictive model to address the churn problem in the motor insurance sector using a dual-phase framework. The first phase employs a cave-degree perturbation technique for effective feature selection, while the second phase applies the Magnetic Force Perturbation Technique (MFPT) to optimize the search process and avoid local optima traps. Two metaheuristics are proposed, the Adaptive Random Forest-Assisted Large Neighborhood Feature Optimizer (ARALFO) and the Adaptive Random Forest Particle Swarm Optimizer (ARFPSO), to enhance churn prediction accuracy.
The model was evaluated on two real-world motor insurance datasets, achieving a 95% accuracy rate and outperforming state-of-the-art algorithms. An ablation study confirmed the significant impact of the cave-degree and MFPT techniques in boosting predictive performance.

1. Introduction

Insurance companies face high churn rates, particularly in motor insurance, where customer retention is critical due to intense market competition, claims management, and fluctuating premium rates. This study focuses on identifying unique churn patterns in this sector. Customer churn, the movement of customers to other providers, is a significant challenge, as it is difficult to identify which customers are at risk of leaving unless they proactively express dissatisfaction. With a large number of policies that expire each month, it is impractical for insurers to contact all customers to confirm their intention to renew. However, by predicting which customers are likely to churn, insurance companies can focus their efforts on those most at risk, thus improving retention
strategies.

Churn prediction approaches can be broadly categorized into two strategies: constructive (local) approaches and global approaches. The constructive approach targets individual customer behaviors and their specific interactions with the product or service, allowing for a more personalized intervention. In contrast, the global approach builds a comprehensive model that applies to the entire customer base, often using machine learning techniques to identify general patterns and trends predictive of churn.

Machine learning (ML) algorithms have become the predominant tool for churn prediction due to their ability to analyze large datasets and make accurate predictions. Common algorithms include logistic regression, decision trees, random forests, support vector machines (SVM), and neural networks. Logistic regression is favored for its simplicity and interpretability, though it can struggle with complex patterns. Regularization techniques such as L1 (Lasso) and L2 (Ridge) are often used with logistic regression to prevent overfitting. L1 regularization penalizes the absolute value of the coefficients, effectively shrinking some to zero and leading to a sparse model that retains only the most important features. L2 regularization, on the other hand, penalizes the square of the coefficients, reducing their magnitude evenly and improving model stability, particularly in the presence of multicollinearity.

Decision trees and random forests effectively handle nonlinear relationships and variable interactions but can be prone to overfitting, especially in high-dimensional data. To address this, optimization
techniques like pruning, which removes insignificant branches, and ensemble methods such as bagging and boosting are used to enhance generalization and reduce overfitting. SVMs are powerful in managing high-dimensional spaces, providing clear margins between churn and non-churn customers; however, they require careful tuning and are computationally demanding. SVMs benefit from kernel tricks that map input data into higher-dimensional spaces, making it easier to find effective separating hyperplanes.

Neural networks, particularly deep learning models, excel at recognizing intricate patterns in large datasets but require significant computational resources and a large amount of labeled data. They are optimized using algorithms such as stochastic gradient descent (SGD), Adam, and learning-rate scheduling, which accelerate convergence and improve performance. Tuning hyperparameters, using methods such as grid search or random search, is crucial to
optimize the predictive accuracy of these models. The choice of algorithm and optimization techniques depends on the characteristics of the data and the desired balance between interpretability, precision, and computational efficiency.

Unlike previous work, this study aims to develop an efficient classification model using ensemble-optimized algorithms to identify clients at risk of churning and determine the time until they churn. The research is structured around three primary objectives: identifying the key factors that contribute to customer churn in the insurance industry, building a conceptual model to predict churn, and evaluating the performance of this predictive model using various algorithms. To achieve these objectives, the study addresses the following questions: What are the key factors driving customer churn in the motor insurance industry? How will the conceptual model for predicting churn be developed? And how will the model's predictive performance be assessed using different algorithms?

The proposed methodology presents a structured framework that addresses the problem through a two-step process: an initialization phase and an optimization phase. The framework introduces two algorithms: the Adaptive Random Forest-Assisted Large Neighborhood Feature Optimizer (ARALFO) and the Adaptive Random Forest Particle Swarm Optimizer (ARFPSO). During the initialization phase, ARALFO utilizes the Random Forest algorithm for feature selection, focusing on identifying the most relevant features that contribute to the predictive power of the model.
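To make the L1/L2 regularization contrast discussed in the introduction concrete, the following sketch fits both penalties on synthetic churn-like data. The features, data-generating rule, and hyperparameters are illustrative assumptions, not the paper's datasets or settings.

```python
# Sketch: L1- vs L2-regularized logistic regression for churn prediction.
# All data here is synthetic; real policy features (premium, tenure, claim
# history, ...) would replace the random stand-ins below.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))
# Churn depends on only two features; the remaining three are pure noise.
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# L1 (Lasso) drives irrelevant coefficients to exactly zero -> sparse model.
l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_tr, y_tr)
# L2 (Ridge) shrinks all coefficients smoothly -> more stable under collinearity.
l2 = LogisticRegression(penalty="l2", C=0.1).fit(X_tr, y_tr)

print("L1 coefficients:", np.round(l1.coef_[0], 3))
print("L2 coefficients:", np.round(l2.coef_[0], 3))
print("L1 test accuracy:", l1.score(X_te, y_te))
```

With a small `C` (strong regularization), the L1 model typically zeroes out the noise features, while the L2 model keeps small nonzero weights on all of them.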
To enhance this selection process, we introduce cave-degree perturbation, a novel technique whose name is inspired by the term "cave": the search space is progressively narrowed, akin to focusing on increasingly promising features during iterative learning. It provides a mathematically weighted importance score for each feature based on its impact on reducing impurity in the decision trees, similar in spirit to techniques like LASSO but using forest-based methods. This technique measures each feature's contribution to the overall predictive model, allowing the identification of the most impactful features. By prioritizing features with the highest cave degrees, the methodology reduces dimensionality, mitigates overfitting, and retains only the most informative features, thereby improving the feature selection process.

Following feature selection, the optimization phase applies the Adaptive Large Neighborhood Search (ALNS) metaheuristic to find near-optimal solutions. This process iteratively breaks and repairs feature subsets, preventing the algorithm from getting trapped in local optima. To further refine this process, a Magnetic Force Perturbation Technique (MFPT) is introduced. The Magnetic Force Perturbation
Technique (MFPT) is inspired by the interaction of magnetic fields, where forces either attract or repel solutions based on their relative quality (similar to charges in Coulomb's law). This approach helps the optimization process avoid local optima, ensuring exploration of more diverse and promising regions of the search space. Unlike simulated annealing, which relies on temperature-based random blind search, MFPT adds directional perturbations, enhancing the model's ability to reach global optima.

We also introduce the Adaptive Random Forest Particle Swarm Optimizer (ARFPSO), which applies the cave-degree concept during the feature selection phase. In the Particle Swarm Optimization (PSO) algorithm, each particle represents a potential solution and navigates the search space based on its velocity, influenced by its personal best (pbest) and the global best solution (gbest) identified so far. The MFPT introduces controlled perturbations to particle velocities, simulating attractive and repulsive forces analogous to magnetic interactions. These perturbations dynamically adjust particle velocities, steering particles away from local minima and promoting the exploration of more promising regions in the search space.
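A minimal sketch of how such a magnetically perturbed velocity update might look follows. The coefficients, "charge" values, and the exact form of the force term are illustrative assumptions, not the paper's implementation.

```python
# Sketch (assumed): one PSO velocity/position update with an added
# magnetic-force term attracting a particle toward the global best.
import numpy as np

rng = np.random.default_rng(0)

def mfpt_velocity(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, k=0.1,
                  q_x=1.0, q_gbest=1.0):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    # Standard PSO velocity update: inertia + cognitive + social terms.
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # Magnetic perturbation: inverse-square attraction toward gbest
    # (analogous to Coulomb's law), added as a directional term.
    d = gbest - x
    dist2 = np.dot(d, d) + 1e-9          # avoid division by zero
    force = k * q_x * q_gbest / dist2
    v_new += force * d / np.sqrt(dist2)  # unit direction scaled by |F|
    return v_new

x = np.array([2.0, -1.0])
v = np.zeros(2)
pbest = np.array([1.5, -0.5])
gbest = np.array([0.0, 0.0])

v = mfpt_velocity(x, v, pbest, gbest)
x = x + v
print("updated position:", x)
```

Because every term here points from the particle toward pbest or gbest, a single update strictly moves the particle's coordinates in the direction of the attractors; in a full ARFPSO, repulsive charges for low-quality regions would push in the opposite direction.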
By modulating the relationship between pbest, gbest, and particle velocities, MFPT improves the balance between exploration and exploitation, improving the swarm's ability to converge on globally optimal solutions.

The proposed dual-phase strategy offers a comprehensive and systematic approach to the problem. The initialization phase establishes a solid foundation by identifying a precise set of predictive features, while the subsequent optimization phase refines and enhances this foundation to produce a robust and efficient classification model for churn prediction. The incorporation of novel perturbation techniques within both phases further strengthens the model's optimization, ensuring high predictive accuracy and effectiveness in identifying customer churn. This integrated approach improves the overall performance and stability of the model.

The principal contributions of our research are summarized as follows:

I. Introducing a novel feature selection technique called cave-degree perturbation, which measures the impact of each feature on the predictive model. This technique enhances feature selection by prioritizing the most significant features and reducing dimensionality.

II. Proposing a Magnetic Force Perturbation Technique that simulates magnetic forces to guide the optimization process, thereby improving convergence and solution quality.

The remainder of
this paper is organized as follows. Section 2 introduces the formal formulation of the problem, while Section 3 provides a comprehensive overview of the proposed algorithms. Section 4 presents the experimental results and analysis. Finally, Section 5 concludes the paper with final remarks and suggestions for future research.

2. Problem Formulation

In this section, Table 1 defines the variables used, followed by the introduction of the problem. We define the set of independent variables as X = {x_1, x_2, x_3, ..., x_n}, where each x_i represents a specific characteristic or behavior of a policyholder. These features may include variables such as policy type, premium amount, claim history, customer tenure, and other relevant factors. The dependent variable y indicates whether a customer churns (y = 1) or not (y = 0). The goal is to predict customer churn status from these independent variables.

Table 1. Variables of the Study

  Symbol   Description
  X        Independent variables
  y_i      Actual churn status for customer i (1 if churned, 0 otherwise)
  ŷ_i      Predicted churn status for customer i (1 if predicted to churn, 0 otherwise)
  x_ij     Feature j for customer i (e.g., policy type)
  β_j      Coefficient for feature j in the logistic regression model
  n        Total number of customers
  p        Total number of features
  λ        Regularization parameter to prevent overfitting

The objective is to maximize the accuracy of churn prediction for motor vehicle insurance customers using a machine learning model subject to various constraints.
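The threshold rule and accuracy objective that the formulation builds on can be written directly in code; the probability and label values below are purely illustrative.

```python
# Sketch of the Section 2 formulation: threshold classification and the
# accuracy objective as the mean of an indicator function.
import numpy as np

def classify(probs, theta=0.5):
    """Prediction threshold: y_hat_i = 1 if P(churn | x_i) > theta, else 0."""
    return (probs > theta).astype(int)

def accuracy(y_hat, y):
    """Objective: (1/n) * sum_i of the indicator [y_hat_i == y_i]."""
    return np.mean(y_hat == y)

probs = np.array([0.9, 0.2, 0.65, 0.4, 0.55])   # model-estimated churn probabilities
y_true = np.array([1, 0, 1, 1, 0])               # actual churn outcomes
y_hat = classify(probs, theta=0.5)

print(y_hat)                      # [1 0 1 0 1]
print(accuracy(y_hat, y_true))    # 0.6
```

Moving θ away from 0.5 trades false positives against false negatives, which is why the formulation treats the threshold as an explicit constraint rather than a fixed constant.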
The constraints of the model include feature, probability, prediction-threshold, and regularization constraints.

Feature Constraints. Each feature x_ij must lie within its defined range:

$$x_{ij}^{\min} < x_{ij} < x_{ij}^{\max} \quad \forall i, j$$

Probability Constraints. The predicted probability of churn must lie between 0 and 1:

$$0 < \hat{y}_i < 1 \quad \forall i$$

Prediction Threshold. A threshold θ classifies customers as churned or not churned:

$$\hat{y}_i = \begin{cases} 1 & \text{if } p(\hat{y}_i = 1 \mid x_i) > \theta \\ 0 & \text{if } p(\hat{y}_i = 1 \mid x_i) \le \theta \end{cases}$$

Regularization Constraint. To control the complexity of the model, a regularization term is incorporated:

$$\sum_{j=1}^{p} |\beta_j| < \lambda$$

Objective Function. The objective is to maximize the accuracy of the churn prediction model over the total number of instances:

$$\text{Accuracy} = \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}(\hat{y}_i = y_i)$$

where 𝟙(ŷ_i = y_i) is the indicator function that returns 1 if the condition is true and 0 otherwise. The formulation maximizes

$$\frac{1}{n} \sum_{i=1}^{n} \mathbb{1}(\hat{y}_i = y_i)$$

subject to

$$x_{ij}^{\min} < x_{ij} < x_{ij}^{\max} \quad \forall i, j$$

to achieve better accuracy or other performance metrics.

3. Proposed Method

This section introduces our proposed method, which addresses the problem through a structured framework consisting of two distinct phases: the initialization stage and the optimization phase. The first phase, known as the initialization stage,
focuses on feature selection. This stage is crucial for identifying the most relevant features of the dataset that will contribute to the predictive power of the model. Feature selection aims to reduce dimensionality, improve model performance, and prevent overfitting by selecting a subset of the most informative features. The goal is to retain features that have the highest impact on the target variable while discarding those that are redundant or irrelevant.

The optimization phase follows the initialization stage. This phase applies advanced algorithms and optimization techniques to enhance the model's accuracy, efficiency, and robustness. The interaction between these two phases ensures a comprehensive approach to problem solving: the feature selection phase lays the groundwork by providing a well-defined set of features, while the optimization phase uses this foundation to achieve superior model performance. Together, these phases form an integrated framework that systematically addresses the challenges of feature selection and model optimization, aiming to deliver a robust and efficient solution.

In summary, as illustrated in Figure 1, our proposed method combines a thoughtful selection of features with a rigorous optimization process. This dual-phase framework is intended to enhance the overall effectiveness of the model and ensure that it performs optimally across various scenarios and conditions.

[Figure 1 flowchart: the feature selection phase (START → data pre-processing → cleaned data → initialization → initial solution) feeds the optimization phase, which produces an improved solution (END).]

Figure 1.
Process Phase Framework

3.1. Feature Selection and Decision-Making in Random Forests

The process begins with the feature selection phase, where the dataset undergoes a thorough preprocessing stage, as shown in Figure 2(A). The preprocessed dataset is passed into multiple decision trees (Tree 1, Tree 2, and Tree 3), each of which independently evaluates the importance of different features. Preprocessing involves data cleaning, handling missing values, and addressing inconsistencies to ensure the dataset is suitable for model training.

Within this ensemble of decision trees, each feature x_i contributes differently to the decision-making process, and the importance of these features is evaluated at each split within the trees. The concept of a "cave degree" operator is introduced to measure the relevance of each feature based on its contribution to reducing impurity at the nodes of the decision trees. Let C_i represent the cave degree of feature x_i, which is computed by averaging the
importance scores Importance(x_i, t) across all trees in the Random Forest ensemble:

$$C_i = \frac{1}{n_{\text{trees}}} \sum_{t=1}^{n_{\text{trees}}} \text{Importance}(x_i, t)$$

The cave degree captures how much a feature contributes to the model's predictive performance, and features with higher cave degrees are considered more significant for the overall decision-making process.

In Figure 2(A), the decision trees visually highlight key decision nodes (yellow circles), representing where significant feature-based decisions are made within each tree. These nodes mark the points where features with higher cave degrees are actively influencing the tree's decisions.

Figure 2(B) shows the aggregated output of the Random Forest model, where the individual decisions from Tree 1, Tree 2, and Tree 3 are combined to form a unified decision boundary. The dashed lines in different colors (blue, green, and red) represent the decision boundaries of the individual trees. The solid black line shows the overall decision boundary of the Random Forest model, which is the average of the decisions made by all trees, demonstrating the ensemble's ability to create a more robust and accurate predictive model.

The orange dotted line represents the sigmoid function S(x) = 1 / (1 + e^{-x}), which converts the combined output of the decision trees into a smooth probabilistic prediction ranging between 0 and 1. The sigmoid function transforms the raw decision outputs into probabilities, making the model suitable for classification tasks where outputs are interpreted as class-membership probabilities.

An important aspect of this visualization is the inflection point marked by the purple star in Figure 2(B). This point corresponds to x = 0 on the sigmoid function, indicating where the probability shifts most rapidly.
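The cave-degree computation above, averaging per-tree importance scores and normalizing, can be sketched with scikit-learn's impurity-based importances. The dataset is synthetic and the threshold τ is an illustrative choice, not the paper's calibrated value.

```python
# Sketch: cave degree C_i as the mean per-tree impurity-based importance,
# normalized so the scores sum to 1, then thresholded for selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           n_redundant=0, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# C_i = (1/n_trees) * sum_t Importance(x_i, t): average each feature's
# impurity-reduction score across all trees in the ensemble.
C = np.mean([tree.feature_importances_ for tree in rf.estimators_], axis=0)

# Normalized cave degree: C~_i = C_i / sum_j C_j.
C_norm = C / C.sum()

tau = 1.0 / len(C)  # illustrative threshold: above-average importance
selected = np.where(C_norm > tau)[0]
print("normalized cave degrees:", np.round(C_norm, 3))
print("selected features:", selected)
```

With three informative features among eight, the above-average threshold typically keeps the informative ones and discards the noise columns; the normalized scores would then serve as the feature weights w_i of Algorithm 1.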
It signifies the threshold where the model's confidence in the prediction changes most dramatically, thus serving as a critical decision boundary in classification tasks.

Therefore, by applying the cave-degree operator and incorporating the sigmoid transformation, the Random Forest model efficiently selects and weights the most relevant features, leading to improved accuracy and robustness in its predictions. This is further validated by the combined decision boundaries shown in Figure 2(B), where the ensemble of decision trees collectively contributes to a refined, probability-based prediction output.

Figure 2. (A) Preprocessed dataset feeding into Random Forest decision trees; (B) combined decision boundaries with sigmoid function and inflection point.

As depicted in Algorithm 1, the feature selection process in a Random Forest is guided by the cave-degree values. By assigning
weights to features based on their normalized cave degrees,

$$\tilde{C}_i = \frac{C_i}{\sum_{j=1}^{p} C_j},$$

where p is the total number of features, the Random Forest model prioritizes the most relevant features. This enhances the model's predictive power, as the most important features are given more weight, leading to superior performance in both classification and regression tasks.

Algorithm 1: Feature Selection Using Cave Degree in Random Forest
  Input: Historical data (X, Y), number of trees n_trees, cave-degree threshold τ
  Output: X_selected: selected features with assigned weights
  foreach feature i in X do
      Calculate the cave degree C_i = (1/n_trees) * Σ_{t=1}^{n_trees} Importance(x_i, t),
      where Importance(x_i, t) is the importance score of feature x_i in tree t of the Random Forest
  end
  Normalize the cave degree of each feature i: C̃_i = C_i / Σ_{j=1}^{p} C_j, where p is the total number of features
  Initialize X_selected ← ∅
  foreach feature i in X do
      if C̃_i > τ then
          Assign weight w_i = C̃_i to feature i
          Add feature i with weight w_i to X_selected
      end
  end
  return X_selected

The normalization places the cave degrees on a comparable scale. Features with a normalized cave degree C̃_i above the threshold τ are selected, as they are considered the most relevant for predicting customer churn: a higher cave degree indicates greater feature importance, making the feature a better candidate for inclusion in the predictive model.

3.2.
Optimization Phase

This section introduces the second phase of the optimization process, which employs two well-known metaheuristic algorithms: Adaptive Large Neighborhood Search (ALNS) and Particle Swarm Optimization (PSO). To further enhance the optimization process, we propose a novel strategy called the Magnetic Force Perturbation Technique, inspired by the natural behavior of magnetic fields, where forces attract or repel particles depending on their polarities. This phenomenon can be modeled mathematically using Coulomb's law for electric forces or the law of magnetic interaction. Analogously, in optimization, we simulate these forces to guide the exploration of the solution space. If we denote the current solution by x and a potential solution by y, the force F_xy exerted on x by y can be expressed as

$$F_{xy} = k \frac{q_x q_y}{\lVert y - x \rVert^2}$$

where k is a constant, q_x and q_y represent the "charges" (qualities) of solutions x and y respectively, and ∥y − x∥ is the Euclidean distance between the solutions. Solutions x close to promising regions, which have higher quality (or "charge"),
experience an attraction force pulling them toward those regions, while solutions in less favorable regions experience repulsion.

The perturbation applied to the current solution x is based on the calculated forces. Let P(x) denote the perturbation applied to x; it is a function of the force F_xy:

$$P(x) = \alpha F_{xy}$$

where α is a control parameter that modulates the magnitude of the perturbation. Larger perturbations explore new regions of the solution space, while smaller perturbations refine solutions in promising areas.

The optimization process is iterative. In each iteration, the solutions are adjusted based on the simulated magnetic forces. If x_t denotes the state of the search process at iteration t, the updated solution x_{t+1} is given by

$$x_{t+1} = x_t + P(x_t)$$

The forces F_xy can be adapted based on the search progress. For instance, as the algorithm converges, the forces may be adjusted, via the parameter α, to focus more on local refinement or on global exploration depending on the current state of the search. This dynamic adjustment balances the exploration of new areas with the exploitation of known promising regions, improving the efficiency of the optimization process.

3.2.1. Adaptive Large Neighborhood Search (ALNS) Algorithm

Adaptive Large Neighborhood Search (ALNS) is a powerful metaheuristic optimization technique designed to tackle complex combinatorial problems.
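The force F_xy, the perturbation P(x) = αF_xy, and the update x_{t+1} = x_t + P(x_t) defined above can be sketched numerically. The constants, charge values, and the choice of a unit-direction force vector are illustrative assumptions.

```python
# Sketch (assumed): iterating the magnetic-force perturbation update toward
# a higher-quality ("higher charge") solution y.
import numpy as np

def magnetic_force(x, y, q_x, q_y, k=1.0):
    """F_xy = k * q_x * q_y / ||y - x||^2, directed from x toward y."""
    diff = y - x
    dist = np.linalg.norm(diff)
    magnitude = k * q_x * q_y / (dist**2 + 1e-9)
    return magnitude * diff / (dist + 1e-9)

def perturb(x, y, q_x, q_y, alpha=0.5):
    """x_{t+1} = x_t + P(x_t) with P(x) = alpha * F_xy."""
    return x + alpha * magnetic_force(x, y, q_x, q_y)

x = np.array([4.0, 4.0])   # current solution
y = np.array([1.0, 0.0])   # attractor: a better solution with higher charge
for t in range(20):
    x = perturb(x, y, q_x=1.0, q_y=2.0)
print(np.round(x, 2))
```

Note the inverse-square law makes steps tiny far from the attractor and large near it, which is why an adaptive α (shrinking the step as the search converges) is needed in practice to avoid overshooting.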
The core idea behind ALNS is to iteratively explore large neighborhoods of the current solution by applying various perturbation and repair strategies. This allows the algorithm to escape local optima and efficiently search a broader solution space. ALNS adapts its neighborhood structures based on observed performance, dynamically balancing exploration and exploitation throughout the search process.

In this context, ALNS is used to address the churn prediction problem by optimizing feature selection. By incorporating advanced techniques such as magnetic force perturbation and strategic solution modifications (i.e., break and repair), ALNS can enhance the search for the most effective feature subsets. The magnetic force perturbation guides the search process by simulating forces that attract or repel solutions based on their quality, thereby improving the exploration of promising regions and avoiding less favorable ones. Algorithm 2 presents the integration of ALNS with magnetic force perturbation and break-and-repair operations.

Figure 3. ALNS search trajectory within a solution space. This figure represents the search trajectory of the ALNS algorithm through the solution space. The blue lines with
arrows indicate the path taken by the algorithm in navigating from one solution to another. The red, blue, and green points correspond to different evaluated solutions: red indicates potential solutions, blue represents neighborhood solutions, and green signifies the best or final solutions found. The black star marks the optimal or most favorable solution identified during the search process, showcasing how ALNS dynamically explores and exploits the solution space.

Figure 4. Optimized ALNS path exploration with distinct solution mapping. This figure illustrates the trend of a performance metric over iterations or time steps as the ALNS algorithm progresses. The orange line represents the smoothed trend of the metric being tracked, while the colored points indicate individual data points at each step, iteration, or trial.
The peaks and troughs in the line show periods of high and low performance, respectively, reflecting the oscillatory and converging behavior of the algorithm as it seeks optimal solutions.

Algorithm 2: Optimization Using ALNS with Magnetic Force Perturbation
  Input: Feature set X_selected, historical data (X_selected, Y), number of iterations N_iter, control parameter α
  Output: Optimal feature set x_best, best churn prediction accuracy, and other performance metrics
  Initialization: Initialize the solution x as a random configuration of features from X_selected; initialize the best solution x_best as x; initialize the best accuracy A_best and other performance metrics to a very low value
  for iteration t = 1 to N_iter do
      Generate a neighborhood solution y by applying perturbations based on magnetic forces
      Calculate the magnetic force between x and y: F_xy = k * q_x * q_y / ∥y − x∥²
      Apply break and repair: x_broken = Break(x); x_repaired = Repair(x_broken, y, F_xy)
      Update the current solution: x_{t+1} = x_repaired
      Evaluate the churn prediction accuracy A_{t+1} and other performance metrics M_{t+1} for x_{t+1}
      if A_{t+1} is better than A_best, or if M_{t+1} is better than the corresponding best metric, then
          update x_best ← x_{t+1}, A_best ← A_{t+1}, and the other best performance metrics ← M_{t+1}
  end
  Return: Optimal feature set x_best, best churn prediction accuracy A_best, and other best performance metrics

3.2.2. Particle Swarm Optimization Algorithm (PSO)

Particle Swarm Optimization (PSO) is a computational method inspired by the social behavior of birds and fish. Developed by Kennedy and Eberhart in 1995, PSO is a heuristic optimization technique that simulates the social interaction of a swarm of particles in a search space to find optimal solutions to complex problems. In PSO, each particle represents a potential solution and moves through the search space
influenced by its own experience and that of its neighbors. Particles adjust their positions based on their own best-known position and the best-known positions of the swarm, guided by a velocity-update mechanism. This process continues iteratively, with particles converging toward the optimal solution over time.

As shown in Algorithm 3, the PSO algorithm begins by initializing a set of particles, where each particle i is represented by a feature configuration x_i selected from the set X_selected. Each particle has an associated velocity v_i, and the personal best solution x_{i,pbest} is initialized to x_i. The global best solution x_gbest is set to the best personal best among all particles. The best accuracy A_best and other performance metrics are initialized to very low values.

For each iteration t from 1 to N_iter, the algorithm updates the velocity and position of each particle. The velocity v_{i,t+1} of particle i is updated based on its previous velocity, its distance from its personal best x_{i,pbest}, and its distance from the global best x_gbest:

$$v_{i,t+1} = w v_{i,t} + c_1 r_1 (x_{i,\text{pbest}} - x_i) + c_2 r_2 (x_{\text{gbest}} - x_i)$$

where w is the inertia weight, c_1 and c_2 are the cognitive and social coefficients, and r_1 and r_2 are random numbers uniformly distributed in the range [0, 1]. The position x_{i,t+1} of the particle is then updated using

$$x_{i,t+1} = x_i + v_{i,t+1}$$

Following the velocity and position update, the algorithm applies the magnetic force perturbation. The magnetic force F_{i,gbest} between the current position of particle i and the global best position x_gbest is calculated as

$$F_{i,\text{gbest}} = k \frac{q_i q_{\text{gbest}}}{\lVert x_{\text{gbest}} - x_i \rVert^2}$$

where k is a constant and q_i and q_gbest represent the "charges" (qualities) of particle i and the global best, respectively.
Citation: Poufinas, Thomas, Periklis Gogas, Theophilos Papadimitriou, and Emmanouil Zaganidis. 2023. Machine Learning in Forecasting Motor Insurance Claims. Risks 11: 164. doi:10.3390/risks11090164
Academic Editor: Shengkun Xie
Received: 11 August 2023; Revised: 11 September 2023; Accepted: 13 September 2023; Published: 18 September 2023

Article
Machine Learning in Forecasting Motor Insurance Claims
Thomas Poufinas, Periklis Gogas *, Theophilos Papadimitriou and Emmanouil Zaganidis
Department of Economics, Democritus University of Thrace, 69100 Komotini, Greece; tpoufina@econ.duth.gr (T.P.); papadimi@econ.duth.gr (T.P.); ezaganid@econ.duth.gr (E.Z.)
* Correspondence: pgkogkas@econ.duth.gr

Abstract: Accurate forecasting of insurance claims is of the utmost importance for insurance activity, as the evolution of claims determines cash outflows and the pricing, and thus the profitability, of the underlying insurance coverage. These are used as inputs when the insurance company drafts its business plan and determines its risk appetite, and the respective solvency capital required (by the regulators) to absorb the assumed risks. The conventional claim forecasting methods attempt to fit (each of) the claims frequency and severity with a known probability distribution function and use it to project future claims. This study offers a fresh approach to insurance claims forecasting. First, we introduce two novel sets of variables, i.e., weather conditions and car sales; second, we employ a battery of Machine Learning (ML) algorithms (Support Vector Machines (SVM), Decision Trees, Random Forests, and Boosting) to forecast the average (mean) insurance claim per insured car per quarter. Finally, we identify the variables that are the most influential in forecasting insurance claims.
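In the spirit of the study's design, lagged car-sales and weather variables feeding tree-based learners, the following sketch builds lagged features on synthetic quarterly data and ranks them by Random Forest importance. All variable names, lags, and the data-generating rule are illustrative assumptions, not the paper's actual dataset.

```python
# Sketch (synthetic data): forecasting a quarterly mean-claim series from
# lagged predictors with a Random Forest, then inspecting importances.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
q = 60  # quarters of history
df = pd.DataFrame({
    "car_sales": rng.normal(1000, 100, q),
    "min_temp": rng.normal(10, 5, q),
})
# Synthetic target: driven mainly by 3-quarter-lagged sales (illustrative rule).
df["mean_claim"] = (0.5 * df["car_sales"].shift(3)
                    + 0.3 * df["car_sales"].shift(1)
                    - 2.0 * df["min_temp"].shift(3)
                    + rng.normal(0, 10, q))
# Build lagged feature columns, mirroring the paper's lagged-variable design.
for col in ["car_sales", "min_temp"]:
    for lag in (1, 2, 3):
        df[f"{col}_lag{lag}"] = df[col].shift(lag)
df = df.dropna()

features = [c for c in df.columns if "lag" in c]
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(df[features], df["mean_claim"])
ranked = sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1])
print("most informative:", ranked[0][0])
```

With the synthetic rule above, a sales lag dominates the importance ranking, echoing the paper's finding that lagged new-car sales were the most informative predictors.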
Our dataset comes from the motor portfolio of an insurance company operating in Athens, Greece and spans the period from 2008 to 2020. We found evidence that the three most informative variables are new car sales with a 3-quarter and a 1-quarter lag, and the minimum temperature at Elefsina (one of the weather stations in Athens) with a 3-quarter lag. Among the models tested, a Random Forest with limited depth and XGBoost run on the 15 most informative variables exhibited the best performance. These findings can be useful in the hands of insurers, as they can include weather conditions and new car sales among the parameters considered when forecasting claims.

Keywords: insurance; claims; forecasting; machine learning

JEL Classification: G22; C53

1. Introduction

Insurance is the activity by which an individual or enterprise exchanges an uncertain (financial) loss for a certain (financial) loss. The former is the outcome of an event for which the insured individual or enterprise
has received coverage via an insurance policy; the latter is the premium that the insured has to pay to receive this coverage. When such an event occurs, the insured may formally request coverage (monetary or in kind) in line with the policy terms and conditions; this request constitutes the insurance claim.

Claims are therefore key components of insurance activity, as they essentially comprise the realization of the insurance product or service. Because future claims occurrence is uncertain, it is in the interest of insurers to carefully frame their claims expectations and provisions; consequently, they pursue claims forecasting. Accurate forecasting of insurance claims is important for several reasons.

First, claims constitute the basis of pricing. In insurance, contrary to other services, the validity of the pricing is confirmed, and the adequacy of the premium proved, only after the experience has been recorded. Traditional pricing is based on historical data; however, it is the occurrence of incidents in the future that determines whether the estimated burning cost was correct. Hence, if the claims experience has not been properly embedded in the pricing models, the (pure) premium may not be sufficient to cover the total claims (incurred or paid), which could lead to a loss-making activity if the premium charged is too low. In contrast, it could result in the loss of customers if the premium charged is too high.

Second, future claims occurrence is important for the compilation of the business plan, as claims affect the future profitability of the company.
In fact, the claims experience is probably the most significant determinant of the operational profitability of the insurance company. This is due to the fact that, when compiling the business plan, an insurance company projects the future premia and the future claims over a period of years. Future premia are based primarily on sales forecasts, the evolution of inflation (ideally the one related to the insurance coverage under examination), as well as the projected claims experience. Expected future claims are based on the historical claims experience as well as on assumptions on the development of claims; this may be decomposed into the development of claims frequency and severity.
Finally, having a forward look at claims is a prerequisite of the insurer's own risk and solvency assessment process and report, which depicts the risk appetite of the insurer and thus the capital required for the solvency of the insurer. As a matter of fact, it usually requires (one of) the biggest portions of capital (allocations). Indeed, insurers assume the risks that individuals and enterprises want to transfer, hedge, or mitigate. A claim is filed when a covered
event (the assumed risk) has occurred. A higher risk appetite indicates the assumption of higher risk and thus higher claim anticipation. This leads to higher (economic) capital required for the absorption of this risk.
The conventional forecasting approaches attempt either to repeat the historical (growth) pattern of claims in the future, with potential seasonality and the respective premia considered, or to match the claims frequency and severity experience of the insurance company with a known probability distribution function. Smaller claims exhibit higher frequency, whereas large claims have a (much) smaller frequency. To improve the precision of the forecasting, large claims are pooled separately from the small claims, and different probability distribution functions are used to best fit the claims frequency and severity of the two pools of claims.
Machine Learning approaches offer an alternative route to claims forecasting.
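The two-pool idea can be sketched as follows. The synthetic claim amounts, the threshold, and the distribution choices (Gamma for attritional claims, Pareto for the tail) are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic claim severities: frequent small (attritional) claims plus a few
# heavy-tailed large ones. Amounts and parameters are illustrative only.
small_claims = rng.gamma(shape=2.0, scale=500.0, size=5_000)
large_claims = 50_000.0 * (1.0 + rng.pareto(a=2.5, size=50))
claims = np.concatenate([small_claims, large_claims])

# Pool the claims at a (hypothetical) large-claim threshold.
threshold = 50_000.0
attritional = claims[claims < threshold]
large = claims[claims >= threshold]

# Fit a different severity distribution to each pool: Gamma for the small
# claims, Pareto for the heavy tail.
gamma_params = stats.gamma.fit(attritional, floc=0)
pareto_params = stats.pareto.fit(large, floc=0)

print(len(attritional), len(large))
```

Each fitted distribution can then be combined with a frequency model for its own pool to build the overall claims forecast.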
The contribution of ML (artificial intelligence, AI) in insurance globally, and in claims prediction specifically, has been recognized by practitioners, who have spotted a wide range of ML applications in insurance, spreading over almost all its processes, such as claims processing, claims fraud detection, claims adjudication, claim volume forecasting, automated underwriting, submission intake, pricing and risk management, policy servicing, insurance distribution, product recommendation, customer retention/lapse management, speech analytics, customer segmentation, workstream balancing for agents, and self-servicing for policy management (Seely 2018 and Somani 2021). A report from the Organization for Economic Cooperation and Development (OECD 2020) subscribes to this point of view, as it identifies the increasing number of ML (AI) applications in insurance, which are enabled through the widespread collection of big data and their analysis. The report pinpoints marketing, distribution and sales, claims (verification and fraud), pricing, and risk classification as broader areas of ML utilization. It further addresses some attention points, such as policy and regulation with regards to the use of ML in insurance, with emphasis, among others, on privacy and data protection, market structure, risk classification, and explainability of ML. The implementation of ML (AI) methods in these sectors of the insurance operations, along with the relevant worries on ethical and societal challenges, has been recorded by Grize et al. (2020), Banks (2020), Ekin (2020), and Paruchuri (2020). The reports of Deloitte (2017), SCOR (2018), Keller et al. (2018), and Balasubramanian et al. (2021) identify similar applications of ML as they pave the future of insurance.
In this paper, we employ a series of Machine Learning algorithms (Support Vector Machines–SVM,
Decision Trees, Random Forests, and Boosting) to forecast the average (mean) insurance claims amount per insured car per quarter and to identify a subset of variables that are the most relevant in determining the average claims amount. The claims data come from the motor portfolio of an insurance company (operating in Athens, Greece) for the period between 2008 and 2020.
This approach is novel, as it investigates the impact of two new-to-the-literature sets of variables, namely variables relevant to weather conditions and car sales, on the evolution of motor insurance claims, with the use of ML techniques to forecast motor insurance claims. More specifically, insurers attempt to forecast motor insurance claims based on their own experience, which depends on the particulars of the vehicle and the driver. However, there is a third component recorded as "road" (Norman 1962; Dimitriou and Poufinas 2016). "Road" describes (environmental) factors such as time of the day, day of the week, weather conditions, type of road design and surface, lighting and visibility, etc. It is essentially a set of factors that refers to everything that can affect the incidence of road traffic accidents other than what is relevant to the driver (road user) and the vehicle; "road" encompasses all factors that are not captured by the driver and the vehicle. Driver, vehicle, and "road" are essentially sets of factors. Consequently, "road" captures (among others) the condition of the terrain, which is impacted by the weather conditions (among other parameters). Furthermore, "road" captures the road usage, which is affected by the number of vehicles using it.
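The forecasting set-up described at the start of this section (lagged predictors, a depth-limited Random Forest, and boosting on the most informative features) can be sketched as below. The data are randomly generated stand-ins whose names only echo the paper's variables, and sklearn's GradientBoostingRegressor is used as a stand-in for XGBoost so that the sketch needs no extra dependency.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic quarterly stand-ins for the paper's inputs (illustrative only).
n_quarters = 48
df = pd.DataFrame({
    "new_cars": rng.normal(25_000, 3_000, n_quarters),
    "used_cars": rng.normal(10_000, 1_500, n_quarters),
    "min_temp_elefsina": rng.normal(10, 5, n_quarters),
    "rainy_days_tatoi": rng.integers(0, 30, n_quarters),
    "mean_claim_per_car": rng.normal(80, 10, n_quarters),
})

# Four lags of every variable, plus an MA(4) of the target, echoing the
# paper's feature set-up (here every column gets all four lags for brevity).
feats = {}
for col in df.columns:
    for lag in range(1, 5):
        feats[f"{col}_t-{lag}"] = df[col].shift(lag)
feats["mean_claim_ma4"] = df["mean_claim_per_car"].rolling(4).mean().shift(1)
X = pd.DataFrame(feats).dropna()
y = df.loc[X.index, "mean_claim_per_car"]

# Random Forest with limited depth, then keep the most informative features.
rf = RandomForestRegressor(n_estimators=200, max_depth=3, random_state=0).fit(X, y)
top = X.columns[np.argsort(rf.feature_importances_)[::-1][:15]]

# Boosting refit on the reduced 15-feature set.
gb = GradientBoostingRegressor(random_state=0).fit(X[top], y)
print(X.shape, list(top[:3]))
```

In practice one would also hold out the most recent quarters for evaluation rather than fit on the full sample.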
This is, in turn, impacted by the new and used car sales. As a result, we believe we unveil the attributes of one important motor accident component, namely "road", which is novel in motor insurance claim forecasting.
We trust this is useful in the hands of insurers, as they now have an additional set of factors with which to perform motor claim forecasting. When performing motor claim forecasting, some insurers, among which the insurer that provided the dataset for this study, rely on the address at which the insured vehicle is registered; hence, they do not consider the area where the accident took place. As a result, the "road" component is not captured. Our approach offers a way to forecast motor insurance claims with the inclusion of two sets of parameters that impact this component: weather conditions and car sales.

2. Literature Review
The bulk of the literature on the applications of machine learning in insurance is relatively recent (post 2019) and, although it covers a wide range of topics relevant to the insurance activity, there is ample room for further research. The main literature strands focus on claims, reserving, pricing, capital requirements–solvency,
coverage ratio, acquisition, and retention. We group them into two main categories: actuarial and risk management, which incorporates the first four (claims, reserving, pricing, and capital requirements–solvency), and customer management, which incorporates the last three (coverage ratio, acquisition, and retention). As the second category is not relevant to our study, we do not present it in detail. The interested reader may look at Mueller et al. (2018) for the coverage ratio; Boodhun and Jayabalan (2018) and Qazi et al. (2020) for acquisition; and Grize et al. (2020) and Guillen et al. (2021) for retention.
The literature that is relevant to actuarial and risk management issues addresses the main functions of the insurance activity and is thus related to actuarial science and risk management. In fact, insurance is the assumption and management of risks that individuals or enterprises wish to transfer or mitigate. These functions entail the monitoring of claims.

2.1. Claims
Fauzan and Murfi (2018) focus on the forecasting of motor insurance accident claims via ML methods with an emphasis on missing data. Rustam and Ariantari (2018) use ML approaches to predict the occurrence of motor insurance claims based on their claim history (with data stemming from an Indonesian motor insurer). Pesantez-Narvaez et al. (2019) attempt to predict the existence of accident claims with the use of ML techniques on telematics data (coming from an insurance company) with an emphasis on driving patterns (total annual distance driven and percentage of distance driven in urban areas). Qazvini (2019) employs ML methods to predict the number of zero claims (i.e., claims that have not been reported) based on telematics data (on French motor third party liability). Bermúdez et al.
(2020) apply ML approaches to model insurance claim counts with an emphasis on the overdispersion and the excess number of zero claims, which may be the outcome of unobserved heterogeneity. Bärtl and Krummaker (2020) attempt to predict the occurrence and the magnitude of export credit insurance claims with the use of ML techniques. The models employed produce satisfactory results for the former but less satisfactory ones for the actual claim ratios (accuracy, Cohen's kappa, and R2 were used to assess model performance). Knighton et al. (2020) focused on forecasting flood insurance claims with ML models applied to hydrologic and social demographic data, realizing that the incorporation of such data can improve flood claim prediction. Hanafy and Ming (2021) apply ML approaches to predict the occurrence of motor insurance claims (over the portfolio of Porto Seguro, a large Brazilian motor insurer). Selvakumar et al. (2021) concentrated on the prediction of the third-party liability (motor insurance) claim amount for
different types of vehicles with ML models (on a dataset derived from Indian public insurance companies).
Some recent articles utilize the data collected through telematics. More specifically, Duval et al. (2022) used ML models to come up with a method that indicates the amount of information, collected via telematics with regards to the policyholders' driving behavior, that needs to be (optimally) retained by insurers to (successfully) perform motor insurance claim classification. Reig Torra et al. (2023) also capitalized on the data provided by telematics and used the Poisson model, along with some weather data, to forecast the expected motor insurance claim frequency over time. They found that weather conditions do affect the risk of an accident. Masello et al. (2023) used the information collected via telematics and employed ML methods to assess the predictive ability of driving contexts (such as road type, weather, and traffic) for driving risks, namely the occurrence of accidents and thus motor insurance claims.
Pesantez-Narvaez et al. (2021) compared the ability of ML models to detect rare events (on a third-party liability motor insurance dataset), realizing that RiskLogitboost regression exhibits a superior performance over other methods. Shi and Shi (2022) employed ML approaches on property insurance claims to develop rating classes and estimate rating relativities for a single insurance risk; perform predictive modeling for multivariate insurance risks and unveil the impact of tail-risk dependence; and price new products.
In a different direction, that of fraud detection, Pérez et al.
(2005) applied ML approaches (on a motor insurance portfolio) in a different context, which still pertained to claims; they focused on the detection of fraudulent claims in motor insurance by properly classifying suspicious claims. Kose et al. (2015) employed ML approaches for the detection of fraudulent claims or abusive behavior in healthcare insurance via an interactive framework that incorporates all the interested parties and materials involved in the healthcare insurance (claim) process. On the same topic, Roy and George (2017) used ML methods to detect fraudulent claims in motor insurance. Wang and Xu (2018) employed ML models that incorporate the (accident) information embedded in the text of the claims to detect potential claim fraud in motor insurance. Dhieb et al. (2019, 2020) applied ML techniques to automatically identify motor insurance fraudulent claims and sort them into different fraud categories with minimal human intervention, along with alerts for suspicious claims.
A series of papers implemented ML approaches in health management/insurance. Bauder et al. (2016) introduced ML approaches to tackle a different topic of insurance claims, thereby allowing them to spot the physicians
that post a potentially anomalous behavior (pointing out misuse, fraud, or ignorance of the billing procedures) in health (medical) insurance claims (with data taken from the USA Medicare system) and for which additional investigation may be necessary.
Hehner et al. (2017) highlighted the merits of the introduction of ML (AI) in hospital claims management, which can be summarized as savings for both the insurers and the insured, as ML algorithms result in increased efficiency and well-informed decision-making to the benefit of all interested parties. Rawat et al. (2021) applied ML methods to analyze claims and conclude on a set of factors that facilitate claim filing and acceptance. Cummings and Hartman (2022) propose a series of ML models that provide insurers the ability to forecast Long Term Care Insurance (LTCI) claim rates and thus better their capacity to operate as LTCI providers.

2.2. Reserving
Baudry and Robert (2019) developed a ML method to estimate claims reserves with the use of all policy and policyholder covariates, along with the information pertaining to a claim from the moment it has been reported, and compared their results with those generated via chain ladder. Elpidorou et al.
(2019) employed ML techniques to introduce a novel Bornhuetter–Ferguson method as a variant of the traditional chain ladder method used for reserving in non-life (general) insurance, through which the actuary can adjust the relative ultimate reserves with the use of externally estimated relative ultimate reserves. In the same direction, Bischofberger (2020) utilized ML methods to extend the chain ladder method via the estimated hazard rate for the estimation of non-life claims reserves.
The outperformance (in 4 out of 5 lines of business studied) of ML algorithms over traditional actuarial approaches in estimating loss reserves (future customer claims) is evidenced by the work of Ding et al. (2020). Similarly, Gabrielli et al. (2020) explore the merit of the introduction of ML approaches to traditional actuarial techniques in improving the non-life insurance claims reserving (prediction).

2.3. Pricing
Gan (2013), in a comparatively early work, priced the guarantees (i.e., found the market value and the Greeks) of a large portfolio of variable annuity policies (generated by the author) via ML techniques. Assa et al. (2019) used ML approaches to study the correct pricing of deposit insurance by improving the implied volatility calibration to avoid mispricing due to arbitrage. Grize et al. (2020) unveiled the role of ML algorithms in (online) motor liability insurance pricing and, at the same time, raised the issue of interpretability. Henckaerts et al. (2021) capitalized on ML methods to price non-life insurance products based on the frequency and severity of claims; their
results are superior to the ones produced by the traditionally employed generalized linear models (GLMs). Kuo and Lupton (2020) explained that the wider adoption of ML techniques (over GLMs) in property and casualty insurance pricing depends very much on their reduced (perceived) transparency. They recommend increased interpretability to overcome this hurdle. These concerns are also addressed in Grize et al. (2020).
Blier-Wong et al. (2020) performed a literature review on the application of ML methods to the property and casualty insurance actuarial tasks of pricing and reserving. They drafted potential future applications and research in the field and noticed three main challenges: interpretability, prediction uncertainty, and potential discrimination.
Some practitioner best practices have already been reported in the literature. AXA, for example, has applied ML methods to forecast large-loss car accidents to achieve optimal motor insurance pricing (Sato 2017; Ekin 2020).

2.4. Capital Requirements–Solvency
Díaz et al. (2005), early enough compared to other studies, employed ML approaches to predict the insolvency of Spanish non-life insurance companies, applied on a set of financial ratios.
Krah et al. (2020) focused on the derivation of the solvency capital requirement that life insurers need to honor under the Solvency II directive in the European Union with the use of ML methods, which are an alternative to the approximation techniques that insurance companies use.
Finally, Wüthrich and Merz (2023), in their book, presented the (entire) array of traditional actuarial and modern machine learning techniques that can be applied to address insurance-related problems.
They explained how these can be applied by actuaries on real datasets and how the derived results may be interpreted.
As can be seen from the aforementioned literature review, our research is closest to the most recent articles of Reig Torra et al. (2023) and Masello et al. (2023), whose work has most likely been done in parallel with ours, as these papers were published in 2023. Still, our work maintains its novelty since (i) we use ML approaches, compared to the work of Reig Torra et al. (2023), who employ the Poisson model (even though they also include weather data in their model); and (ii) we use the weather conditions and car sales, whereas Masello et al. (2023) concentrate on telematics-based driving contexts and safety.

3. Data and Variables
As the goal of this paper is to forecast the mean motor insurance claim cost, our dataset employs the claims data of a motor insurance portfolio from Athens, Greece. The data span a period from 2008 to 2020. The frequency of the variables in our dataset is constrained by the availability of the data from the insurance company. Thus, we used a sample with quarterly frequency.
Besides the claims data, our dataset consists of
the number of new car sales and imported used car sales in the greater region of Athens, and the weather conditions, as described by the maximum and minimum temperatures, the number of days on which the temperature was below zero (Celsius), and the number of rainy days for three geographical areas where weather stations are located in the broader region of Athens (Elefsina, Tatoi, and Spata). The choice of the three locations was dictated by the availability of data; the weather is recorded in several more areas within the broader Athens region, though we discovered large periods with no recorded values and, consequently, we were unable to include data from these areas in our dataset.
We have also included in our dataset four lags of each independent variable, as well as the moving averages of order four (MA(4)) for the target variable, the number of new cars, and the number of imported used cars sold. The total number of observations is 48, while the total number of explanatory variables is 79 (16 meteorological variables with four lags, plus the target variable, the number of new cars, and the number of imported used cars sold, with their four lags and their moving averages).
The weather conditions data came from the Hellenic National Meteorological Service (HNMS 2022); the new and imported used car sales came from the Association of Motor Vehicles Importers Representatives (AMVIR 2022); and the claims data came from the motor insurance portfolio of the insurance company in Athens, Greece (which prefers not to be disclosed). All data were retrieved from their providers after a formal request.
Consequently, the dependent (target) variable of our models is the mean (motor) insurance claims amount per car per quarter. The independent variables are presented in Table 1 below.
Table 1.
Independent Variables.

Independent Variables | Definition | Source
Mean Insurance Claims/car (t-2, t-4) | The average claim amount per insured car | The motor insurance portfolio of the insurance company
New cars (t-1 to t-4) | New car sales | AMVIR (2022)
Used cars (t-1 to t-4) | Imported used car sales | AMVIR (2022)
Max Temp Elefsina (t-1 to t-4) | The maximum temperature recorded at the weather station of Elefsina | HNMS (2022)
Min Temp Elefsina (t-1 to t-4) | The minimum temperature recorded at the weather station of Elefsina | HNMS (2022)
Mean Temp Elefsina (t-1 to t-4) | The average temperature recorded at the weather station of Elefsina | HNMS (2022)
Max Temp Tatoi (t-1 to t-4) | The maximum temperature recorded at the weather station of Tatoi | HNMS (2022)
Min Temp Tatoi (t-1 to t-4) | The minimum temperature recorded at the weather station of Tatoi | HNMS (2022)
Mean Temp Tatoi (t-1 to t-4) | The average temperature recorded at the weather station of Tatoi | HNMS (2022)
Max Temp Spata (t-1 to t-4) | The maximum temperature recorded at the weather station of Spata | HNMS (2022)
Min Temp Spata (t-1 to t-4) | The minimum temperature recorded at the weather station of Spata | HNMS (2022)
Mean Temp Spata (t-1 to t-4) | The average temperature recorded at the weather station of Spata | HNMS (2022)
No of rainy days Elefsina (t-1 to t-4) | The number of days with rain recorded at the weather station of Elefsina | HNMS (2022)
No of rainy days Spata (t-1 to t-4) | The number of days with rain recorded at the weather station of Spata | HNMS (2022)
No of rainy days Tatoi (t-1 to t-4) | The number of days with rain recorded at the weather station of Tatoi | HNMS (2022)
No of days below zero Tatoi (t-1 to t-4) | The number of days with a temperature below zero recorded at the weather station of Tatoi | HNMS (2022)
No of days below zero Elliniko (t-1 to t-4) | The number of days with a temperature below zero recorded at the weather station of Elliniko | HNMS (2022)
No of days below zero Elefsina (t-1 to t-4) | The number of days with a temperature below zero recorded at the weather station of Elefsina | HNMS (2022)
No of days below zero Spata (t-1 to t-4) | The number of days with a temperature below zero recorded at the weather station of Spata | HNMS (2022)
Moving Average Mean Insurance Claims/car; Moving Average New Cars; Moving Average Used Cars | The moving averages of the aforementioned variables | As above

Note: Data and variables are on a quarterly basis. The time lag notation is as follows: t-1 denotes a 1-quarter lag; t-2 denotes a 2-quarter lag; t-3 denotes a 3-quarter lag; and t-4 denotes a 4-quarter lag. Source: Created by the authors.

Figure 1 depicts the evolution of the mean insurance claims amount per insured car per quarter. One can observe a declining trend from 2010 to 2016 (with some seasonality in peaks and troughs, especially after 2012), which is most likely attributed to a significant reduction in car activity during this period. This was the result of the Greek sovereign
debt crisis that started in 2010 and resulted in strict austerity measures that greatly and negatively impacted household income, consumption, and the GDP. Fuel prices increased significantly after a new tax on fuel was introduced, and car sales reached a minimum for the decade. After 2017, the trend is slightly increasing until the end of 2019, which coincides with the recovery of the Greek economy from the debt crisis. In 2020, the trend is decreasing again, without the seasonal rebound at the end of the year noted in previous years. This is most probably due to the effect of the pandemic, although we need more recent data to determine whether this assumption is valid. In the same figure, we also illustrate the situation of the Greek economy: unshaded areas represent periods of real GDP growth, while shaded areas represent periods of negative real GDP growth (real output contractions). There is a positive correlation between the insurance claims and real GDP. The relevant Pearson correlation coefficient is r = 0.49. This correlation statistic is significant even at the 0.01 significance level, with a p-value of 0.000368.
Figure 1. The time series of the mean insurance claims per insured car on a quarterly basis. In the background we depict the situation of the Greek economy: unshaded areas represent periods of real GDP growth, while shaded areas represent periods of negative real GDP growth (real output contractions). Source: Based on authors' estimates with data from the motor insurance portfolio.
We observe that the mean insurance claims exhibit some seasonality. More specifically, there is a peak (local maximum) noted on an annual basis during the 4th quarter of each year.
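A correlation test of this kind can be reproduced with scipy. The two series below are synthetic stand-ins with a built-in common component, not the paper's claims and GDP data; the paper itself reports r = 0.49 with p = 0.000368 on its own sample.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic quarterly series standing in for real GDP growth and mean claims
# per insured car (48 quarters, matching the paper's sample length).
gdp_growth = rng.normal(0.0, 1.0, 48)
mean_claims = 0.5 * gdp_growth + rng.normal(0.0, 1.0, 48)

# Pearson correlation coefficient and its two-sided p-value.
r, p_value = stats.pearsonr(gdp_growth, mean_claims)
print(round(r, 3), round(p_value, 6))
```

A small p-value indicates that the correlation would be unlikely to arise by chance under the null hypothesis of no linear association.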
In fact, there is a V-shaped formation starting from the peak of the 4th quarter of the previous year, dropping to reach a trough (local minimum) during the 3rd quarter, and rising to reach a peak during the 4th quarter of the year. This is most likely attributed to the fact that the insured tend to declare their claims towards year-end and that the insurers tend to settle/pay, even the claims that were declared earlier in the year, towards year-end. The only exception is 2009, which is most likely due to the financial crisis that hit the country in 2009 and because of which the pattern may have been disrupted. The peak has shifted towards the 1st quarter of 2010. A second, lower peak is observed in the 2nd quarter of 2010, which subscribes to this point of view. After that, the pattern resumes until 2017, where the peak appears a bit earlier, towards the end of the 3rd quarter, which is a small deviation from the seasonality observed.

4. Methodology
Machine Learning was established in the 1950s to deliver the "Learning"
component of Artificial Intelligence (AI) systems. The basic concept of Machine Learning is automated analytical model building; it is the idea that systems can learn from the data, identify patterns, and make decisions with minimal human intervention. They can also automatically improve their performance through experience. This is achieved by learning patterns and relationships in the data.
Historically, Machine Learning has relied on large datasets (Gogas and Papadimitriou 2021). This is the reason Machine Learning in economics was mainly applied to financial data, the subfield of economics with an abundance of data, mainly due to the availability of very high frequencies: daily, hourly, or even seconds or tick-to-tick. Towards the end of the 20th century, new algorithms were introduced, such as the Support Vector Machines and Random Forest coupled with Boosting and Bagging techniques, which achiev
Citation: Wilson, Alinta Ann, Antonio Nehme, Alisha Dhyani, and Khaled Mahbub. 2024. A Comparison of Generalised Linear Modelling with Machine Learning Approaches for Predicting Loss Cost in Motor Insurance. Risks 12: 62. https://doi.org/10.3390/risks12040062
Academic Editor: Angelos Dassios
Received: 17 February 2024; Revised: 27 March 2024; Accepted: 28 March 2024; Published: 31 March 2024
Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

A Comparison of Generalised Linear Modelling with Machine Learning Approaches for Predicting Loss Cost in Motor Insurance
Alinta Ann Wilson 1, Antonio Nehme 1,*,†, Alisha Dhyani 2,† and Khaled Mahbub 1
1 School of Computing, Birmingham City University, Birmingham B4 7RQ, UK; alinta.wilson@mail.bcu.ac.uk (A.A.W.); khaled.mahbub@bcu.ac.uk (K.M.)
2 National Farmers Union Mutual Insurance Society, Tiddington, Stratford-upon-Avon CV37 7BJ, UK; alisha_dhyani@nfumutual.co.uk
* Correspondence: antonio.nehme@bcu.ac.uk
† These authors contributed equally to this work.

Abstract: This study explores the insurance pricing domain in the motor insurance industry, focusing on the creation of "technical models", which are essentially obtained after combining the frequency model (the expected number of claims per unit of exposure) and the severity model (the expected amount per claim). Technical models are designed to predict the loss costs (the product of frequency and severity, i.e., the expected claim amount per unit of exposure), and this is a main factor that is taken into account for pricing insurance policies. Other factors for pricing include the company expenses, investments, reinsurance, underwriting, and other regulatory restrictions.
Different machine learning methodologies, including the Generalised Linear Model (GLM), Gradient Boosting Machine (GBM), Artificial Neural Networks (ANN), and a unique hybrid model that combines GLM and ANN, were explored for creating the technical models. This study was conducted on the French Motor Third Party Liability datasets, "freMTPL2freq" and "freMTPL2sev", included in the R package CASdatasets. After building the aforementioned models, they were evaluated, and it was observed that the hybrid model which combines GLM and ANN outperformed all other models. ANN also demonstrated better predictions, closely aligning with the performance of the hybrid model. The better performance of neural network models points to the need for actuarial science and the insurance industry to look beyond traditional modelling methodologies like GLM.

Keywords: Generalised Linear Model (GLM); Gradient Boosting Machine (GBM); Artificial Neural Networks (ANN); frequency modelling; severity modelling; loss cost model

1. Introduction
The financial services industry, especially its most prominent and visible member, Insurance, is undergoing a rapid and disruptive change fueled by
severity modelling; loss cost model1. IntroductionThe financial services industry, especially its most prominent and visiblemember—Insurance—is undergoing a rapid and disruptive change fueled by advancedtechnologies and changing consumer needs. The insurance sector is on the brink of adigital revolution, transforming the way it conducts business, by leveraging advanceddata analytical capabilities as a primary driver Garrido et al. (2016). The amount of datagenerated by the industry is huge and all the companies realising the potential benefits thatdata analysis can bring to them are investing billions into enhancing their data analyticaltechniques.The insurance industry is unique because it manages the risks and uncertainties as-sociated with unforeseeable events. Insurance companies offer the service of coveringunpredictable events, as per their terms and conditions, in exchange for an annual sum (theinsurance premium) to be paid by the customer Poufinas et al. (2023). Instead of knowingthe precise cost up front, the pricing of insurance products is based on calculating theRisks 2024 ,12, 62. Risks 2024 ,12, 62 2 of 28prospective losses that could occur in the future Kleindorfer and Kunreuther (1999). Insur-ance firms have long used mathematics and statistical analysis to make these estimations.Statistical methods have been used in the different subdivisions of the insurance sector,including motor, life, and general insurance, to ensure that the premiums charged to thecustomers are enough to maintain the financial solvency of the company while meeting itsprofit targets.Motor insurance pricing is based on the risk factors of the policyholder, such as the ageof the driver, the power of the vehicle, or the address where the policyholder resides in thecase of auto insurance. Using these features, an actuary generates groups of policyholderswith corresponding risk assessments. 
In addition to these static risk factors, insurance companies take vehicle use into account, including the frequency of use and the area, when finalising the risk premiums Vickrey (1968); Henckaerts and Antonio (2022). In this paper, the focus is on motor insurance, but the findings are applicable to other divisions due to the common practices for determining risk Wüthrich and Merz (2022).

Uncertainty is at the core of the insurance business, which makes stochastic modelling suitable for forecasting various outcomes from different random attributes Draper (1995). Creating a stochastic model to facilitate decision-making and risk assessment in the insurance industry requires the analysis of historical data to forecast future costs for an insurance policy. Data mining can be used to retrieve this vital information. Risk clustering is a data mining approach that can help create large groups that are similar within classes and different between classes Smith et al. (2000).

Actuarial science is a discipline that analyses data, assesses risks, and calculates probable loss costs for insurance policies using a range of mathematical models and methods Dhaene et al. (2002). Actuaries consider information from the past, including demographics, medical records, and other relevant factors, to create precise risk profiles for specific people or groups with a similar potential outcome. The main challenge for actuaries lies in modelling the frequency of claims. When a person applies for a car insurance policy, the forecast frequency of claims is a major factor in classifying the candidate within a certain risk profile David (2015). Claims severity (the amount of each claim) is another factor that determines the exposure of the insurance company if a policy is offered Mehmet and Saykan (2015). Multiplying the number of claims (the frequency) by the average amount per claim (the severity) reflects the cost of the claims to the insurance company. Combining frequency and severity models thus enables the prediction of loss costs Garrido et al. (2016). These predictions are referred to as "technical models" in the insurance industry, as they do not take into account other external factors, including inflation, regulatory constraints, and practices followed to ensure customer retention. Factors such as expenses, investments, reinsurance, and other model adjustments and regulatory constraints assist in arriving at the final premium amount Tsvetkova et al. (2021); Shawar and Siddiqui (2019).
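The loss-cost identity described above can be made concrete with a tiny worked example. All numbers below are invented for illustration only:

```python
# Illustrative numbers only: a worked example of the loss-cost identity
# described above (loss cost = frequency x severity).
claims_per_policy_year = 0.08   # expected claim frequency per unit of exposure
avg_amount_per_claim = 1500.0   # expected severity (amount per claim)
exposure_years = 0.5            # the policy was in force for half a year

loss_cost = claims_per_policy_year * avg_amount_per_claim   # per unit of exposure
expected_claim_cost = loss_cost * exposure_years            # for this policy

print(loss_cost)            # 120.0
print(expected_claim_cost)  # 60.0
```

The technical model predicts the first quantity; scaling by exposure gives the expected cost of an individual policy before any of the commercial adjustments mentioned above.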
Developing the best possible "technical models" is essential for an insurance company to enable the prediction of appropriate insurance premiums Guelman (2012), and this is the focus of this paper.

Typically, insurance companies rely on statistical models that provide the level of transparency required to justify the pricing to the regulatory body and to customers when needed. The Generalised Linear Model (GLM) is the leading technique used for pricing in the insurance industry due to its ease of interpretability and its ability to give a clear understanding of how each predictor affects the outcome Zheng and Agresti (2000). GLM, however, suffers from several shortfalls, especially as the amount of data considered when building the models increases; these shortfalls include the following:

• The inefficiency of stepwise regression, a practice used for factor selection, with high-dimensional training data: the systematic testing of all possible combinations of factors and their interactions is very demanding and does not guarantee satisfying accuracy Smith (2018).
• The time inefficiency of updating GLM models in light of new data, as common practices followed by actuaries require manual steps to fine-tune the weight of each factor.
• The dependence of GLM on assumptions about the distribution of the data, which are not always valid for datasets representing an unusual market, for example, due to a certain phenomenon impacting the behaviour of customers King and Zeng (2001).
• The unsuitability of GLM for modelling non-linear complex trends Xie and Shi (2023).

These factors are among the reasons that have driven insurance companies to start exploring machine learning techniques for pricing Poufinas et al. (2023). Machine learning, however, is only meant to support and not substitute GLM modelling, due to the black-box nature of some machine learning algorithms and the opaqueness of the produced models, which makes it difficult to justify the impact of every factor on the prediction to regulatory bodies Rudin (2019). Actuaries are now exploring the use of machine learning techniques to find a balance between the improved accuracy that can be achieved and the transparency offered by GLM Lozano-Murcia et al. (2023). In this paper, we explore some of these techniques and compare their performance with GLM. We also used a hybrid model that factors in the predictions of the frequency and severity of claims from a GLM model as an attribute for the artificial neural network model. The rest of this paper is structured as follows: Section 2 discusses related work and highlights the need for this study; Section 3 describes the dataset and discusses the theoretical background related to the different steps of the knowledge discovery process; Section 4 elaborates on these steps and discusses the hyperparameter tuning of the models; Section 5 includes the results and discussion of the findings; and Section 6 concludes the paper and gives directions for future work.
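To give a rough sense of the scale behind the first shortfall listed above, consider the combinatorics of an exhaustive stepwise factor search. The factor count (12) matches the freMTPL2freq dataset introduced later; the rest is a deliberate simplification:

```python
import math

# Back-of-envelope illustration of why exhaustive stepwise factor search is
# demanding: even before interactions, the number of candidate main-effect
# subsets grows exponentially in the number of factors.
n_factors = 12
main_effect_subsets = 2 ** n_factors             # every subset of main effects
pairwise_interactions = math.comb(n_factors, 2)  # candidate two-way interactions

print(main_effect_subsets)    # 4096
print(pairwise_interactions)  # 66
```

Including each of the 66 pairwise interactions as an optional term multiplies the search space further, which is why exhaustive stepwise selection does not scale.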
2. Literature Review

Generalised Linear Modelling (GLM) methods have long been a standard industry practice for non-life insurance pricing Xie and Shi (2023). The monograph by Goldburd et al. (2016) serves as a reference for the use of GLMs in classification rate-making; the authors built a classification plan from raw premium and loss data. Kafková and Křivánková (2014) analysed motor insurance data using GLM and created a model for the frequency of claims. The performance of the models was compared using the Akaike information criterion (AIC) and Analysis of Deviance. However, AIC penalises complexity, putting models with a large number of relevant parameters at a disadvantage. This work presents a relatively simple model and validates the significance of three variables in the prediction process: the policyholder's age group, the age of the vehicle, and the location of residency.

Garrido et al. (2016) used generalised linear models to calculate the premium by multiplying the mean severity, the mean frequency, and a correction term intended to induce dependency between these components, on a Canadian automobile insurance dataset. This method assumes a linear relationship between average claim size and the number of claims, which may not hold in all cases Xie and Shi (2023). David (2015) also discusses the usefulness of GLMs for estimating insurance premiums as the product of the expected frequency and claim costs. The authors of these papers, however, did not explore other models that are suitable for more complex non-linear trends.

Moreover, Poisson models are commonly employed in GLM approaches within the insurance industry to predict claim frequency. Throughout the literature, multiple authors have stated that the Poisson model is the main method for forecasting the frequency of claims in the non-life insurance sector Denuit and Lang (2004); Antonio and Valdez (2012); Gourieroux and Jasiak (2004); Dionne and Vanasse (1989). For the claim severity model, the literature asserts that the Gamma model is the conventional approach for modelling claim costs David (2015); Pinquet (1997).

Xie and Shi (2023) stated that it is simpler to choose key features, or to measure the significance of features, in linear models than in decision-tree-based techniques or neural networks. However, they also asserted that GLM is unable to recognise complex or non-linear interactions between risk factors and the response variable.
As these would affect the pricing accuracy, they stated the necessity of considering alternative solutions, including more complicated non-linear models.

Zhang (2021) performed a comparative analysis to evaluate the impact of machine learning techniques and generalised linear models on the prediction of car insurance claim frequency. Seven insurance datasets were used in this extensive study, which employed the standard GLM approach, XGBoost, random forests, support vector machines, and deep learning techniques. According to the study, XGBoost predictions outperformed GLM on all datasets. Another recent study, by Panjee and Amornsawadwatana (2024), confirmed that XGBoost outperformed GLMs for frequency and severity modelling in the context of cargo insurance. Guelman (2012) used the Gradient Boosting Machine (GBM) method for auto insurance loss cost modelling and stated that, while needing little data pre-processing and parameter adjustment, GBM produced interpretable results. They performed feature selection and then produced a GBM model capturing the complex interactions in the dataset, resulting in higher accuracy than that obtained from the GLM model. Poufinas et al. (2023) also suggested that tree-based models performed better than alternative learning techniques; their dataset, however, was limited to 48 instances, and their results were not compared to GLM.

In the existing literature, multiple papers also support the use of neural network algorithms in the insurance industry. Many experiments have been done on the use of neural networks in insurance pricing, and these studies concluded that such models result in higher accuracy than traditional models like GLM Bahia (2013); Yu et al. (2021). In 2020, a study was conducted on French Motor Third-Party Liability claims in which methods such as regression trees, boosting machines, and feed-forward neural networks were benchmarked against a classical Poisson generalised linear model Noll et al. (2020). The results showed that methods other than GLM were able to capture the feature component interactions appropriately, mostly because GLMs require extended manual feature pre-processing. They also emphasised the importance of 'neural network boosting', where an advanced GLM model is nested into a bigger neural network Schelldorfer et al. (2019). Schelldorfer et al. (2019) discussed the importance of embedding classical actuarial models like GLM into a neural net, known as the Combined Actuarial Neural Net (CANN) approach. In doing so, CANN captures complex patterns and non-linear relationships in the data, whereas the GLM layer accounts for specific actuarial assumptions. Using a skip connection that directly connects the neural network's input layer and output layer, the GLM is integrated into the network architecture.
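A toy sketch of this skip-connection idea may help: the GLM prediction passes through untouched while a small network learns a bounded correction on the log scale. All coefficients below are made up for illustration; a real CANN trains the network weights on data:

```python
import math

# Toy sketch of the CANN skip connection described above. The GLM prediction
# enters directly, and a (here, one-neuron) "network" learns a multiplicative
# correction on the log scale. All coefficients are invented for illustration.
def glm_frequency(x, beta0=-2.5, beta=(0.02, 0.01)):
    # log-link Poisson GLM: E[N] = exp(beta0 + beta . x)
    return math.exp(beta0 + sum(b * xi for b, xi in zip(beta, x)))

def nn_correction(x, w=(0.005, -0.003), bias=0.0):
    # a single tanh neuron producing a bounded log-scale adjustment
    return math.tanh(bias + sum(wi * xi for wi, xi in zip(w, x)))

def cann_frequency(x):
    # skip connection: log E[N] = log(GLM prediction) + NN correction
    return glm_frequency(x) * math.exp(nn_correction(x))

x = (45, 7)  # e.g. driver age and vehicle power for one policy
print(glm_frequency(x), cann_frequency(x))
```

If the network weights are initialised at zero, the correction vanishes and the hybrid reproduces the GLM exactly, which is one reason this nesting is attractive to actuaries.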
This strategy makes use of the capabilities of neural networks to improve the GLM, and is referred to as a hybrid model.

Hybrid models are popular nowadays, as they combine the output of multiple models and produce more accurate results than single models Ardabili et al. (2019). The fundamental principle underlying hybrid modelling is to combine the outputs of various models in order to take advantage of their strengths and minimise their flaws, thus improving the robustness and accuracy of predictions Zhang et al. (2019); Wu et al. (2019).

From the literature review, it can be deduced that numerous publications support the traditional GLM model. GLM remains the most common data mining method in the insurance industry, and GLM models are still effective because of their flexibility, simple nature, and ease of implementation. However, GLM cannot handle diverse data types and does not deal well with non-linear relationships in the data. Due to these drawbacks, multiple works in the literature support the Gradient Boosting Machine (GBM) model, since it ensures model interpretability through its feature selection capabilities. The literature review also highlights the importance of the neural network approach, as it shows better and more reliable results than traditional models. The literature demonstrates that hybrid models work more effectively than single models and suggests that combining GLM and neural networks performs better, as it aids in maximising the advantages of both techniques. This was shown by Schelldorfer et al. (2019), whose CANN approach produced reliable results by capturing complex patterns and non-linear relationships in the data while also including actuarial assumptions specific to the insurance industry. While reviewing the literature, it was noted that methods such as random forests and support vector machines are not frequently used for the calculation of claim frequency and severity. This may be because these techniques require significant computational effort and training.

Building on the findings from the literature, this study aims to explore the effectiveness of Combined Actuarial Neural Networks (CANN) in comparison with GLM, GBM, and artificial neural network models. Compared to the work of Noll et al. (2020) and Schelldorfer et al. (2019), our work is the first to compare these four models together and discuss the findings to help guide the actuarial community in finding the needed tradeoff between accuracy and transparency. Our work covers twelve models with different sets of features and details every step of the Knowledge Discovery from Databases (KDD) process, from data preparation and cleaning to the analysis and comparison of the results of our models. Compared to other work in the literature, this paper focuses on motor insurance pricing models, but the findings are extendable to other insurance subdivisions.
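As a side illustration of the Poisson-Gamma pairing that recurs throughout this review (Poisson for frequency, Gamma for severity), the following minimal simulation draws aggregate claims under that pairing. All parameter values are invented:

```python
import math
import random

# Toy simulation of the conventional Poisson frequency / Gamma severity
# pairing noted in the review above. Parameter values are invented.
random.seed(42)
LAM = 0.1                   # expected number of claims per policy-year
SHAPE, SCALE = 2.0, 750.0   # Gamma severity, mean = SHAPE * SCALE = 1500

def poisson_sample(lam: float) -> int:
    # Knuth's multiplication method; fine for small rates like these
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

def aggregate_claim() -> float:
    # total claim cost for one simulated policy-year
    n = poisson_sample(LAM)
    return sum(random.gammavariate(SHAPE, SCALE) for _ in range(n))

mean_cost = sum(aggregate_claim() for _ in range(20_000)) / 20_000
print(round(mean_cost, 1))  # should land near LAM * SHAPE * SCALE = 150
```

The simulated mean recovers the frequency-times-severity loss cost, which is exactly the quantity the technical models in this paper set out to predict.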
3. Theoretical Background

This section introduces the dataset and the various steps that are followed to build the model. The steps described below are aligned with the Knowledge Discovery from Databases (KDD) process, due to the significance and importance of this methodology for building the best possible models Fayyad et al. (1996).

3.1. Dataset Source and Description

The outcomes of the project can be impacted by the choice of dataset, so an important step was finding and selecting a suitable dataset for achieving the project's objective. Thorough research was necessary, since the insurance dataset needed to include both frequency and severity counterparts. After in-depth research, the French Motor Third-Party Liability datasets "freMTPL2freq" and "freMTPL2sev", included in the R package CASdatasets, were selected for claim frequency modelling and claim severity modelling Dutang and Charpentier (2020). These datasets contain the risk characteristics gathered over one year for 677,991 motor vehicle third-party liability insurance policies. While freMTPL2freq comprises the risk characteristics and the claim number, freMTPL2sev provides the claim cost and the associated policy ID Dutang and Charpentier (2020).

Tables 1 and 2 list the attributes of the freMTPL2freq and freMTPL2sev datasets along with each feature's description and data type. The freMTPL2freq dataset contains 678,013 individual car insurance policies, and each policy has 12 associated variables. The freMTPL2sev dataset includes 26,639 observations of claim amounts and corresponding policy IDs. Both datasets were merged, and the entire analysis and model building were conducted on the merged data.

Table 1. Features from the freMTPL2freq dataset.
Feature | Description | Data Type
IDpol | The policy ID (can be linked with the severity dataset) | Number
ClaimNb | Number of claims during the given period | Integer
Exposure | The exposure period for the policy | Number
Area | Indicates the density value where the car driver lives; "A" for rural areas to "F" for urban areas | Character
VehPower | Power of the vehicle | Integer
VehAge | Age of the vehicle, in years | Integer
DrivAge | The driver's age, in years (in France, the legal driving age is 18) | Integer
BonusMalus | Bonus/Malus score, ranging from 50 to 350; in France, a score of 100 or less means Bonus, and above 100 means Malus | Integer
VehBrand | Vehicle brand | Character
VehGas | Fuel for the car, either regular or diesel | Character
Density | The population density (measured as people per square kilometre) in the city where the car driver resides | Integer
Region | The policy region in France (based on a classification from 1970 to 2015) | Character
Table 2. Features from the freMTPL2sev dataset.
Feature | Description | Data Type
IDpol | The policy ID (used to link to the frequency data) | Number
ClaimAmount | Amounts associated with claims | Integer

3.2. Data Cleaning and Pre-Processing

Data cleaning and pre-processing is a crucial step, as unprocessed and incomplete data cannot produce good results. It is widely acknowledged that the success of each data mining method is significantly influenced by the standard of data pre-processing Miksovsky et al. (2002). If not properly pre-processed, the data may include mismatched data types, outliers, imbalances, missing values, and so on. The pre-processing steps employed in the study are described here.

After merging the frequency and severity datasets, they are thoroughly pre-processed. The steps taken during the initial pre-processing stage are listed below:

• NA values for severity after merging the datasets are changed to zero: the left join of the frequency and severity datasets resulted in records with 0 claims having no equivalent severity records (leading to NA values).
• Duplicate rows (exact duplicates) have been removed, as those were deemed to be data entry errors.
• The dataset was filtered to eliminate any rows with claim amounts equal to zero, because removing claims with zero amounts improves the performance of the severity data; within the severity dataset, the claim amounts up to the 97th percentile were zeroes.
• Upon observing the data, it was found that there was a substantial difference between the value at the 99.99th percentile (974,159) and the 100th percentile (4,075,400). Claim amounts beyond the 99.99th percentile constitute approximately 14% of the total claim amount and can be considered extreme values. Hence, the claim amount at the 99.99th percentile is set as a threshold, and claim amounts above it are capped at that value.

Two different approaches have been followed in the pre-processing for the models. The pre-processing steps for GLM and GBM are as follows:

• Convert the categorical variables 'Area' and 'VehGas' into factors and then into a numeric format.
• Convert the variables 'VehBrand' and 'Region' into factors.
• Convert the 'Density' variable into numeric format and 'BonusMalus' into integer format.
• Modify the variable 'ClaimNb' into double format.

The pre-processing steps followed for ANN are as follows:

• Perform min-max scaling for the numerical features 'VehPower', 'VehAge', 'DrivAge', 'BonusMalus', and 'Density'.
• Execute one-hot encoding for the categorical features 'Area', 'VehGas', 'VehBrand', and 'Region'.
• Modify the variable 'ClaimNb' into double format.
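The merge, NA-fill, de-duplication, and extreme-value capping steps above can be sketched in pure Python (the paper itself works in R). The records are toy stand-ins, and the 100,000 cap stands in for the paper's 99.99th-percentile threshold of 974,159:

```python
# Pure-Python sketch of the initial pre-processing steps listed above,
# applied to made-up toy records (the study itself uses R).
freq_rows = [
    {"IDpol": 1, "ClaimNb": 0},
    {"IDpol": 2, "ClaimNb": 1},
    {"IDpol": 2, "ClaimNb": 1},   # exact duplicate, treated as a data error
    {"IDpol": 3, "ClaimNb": 2},
]
severity = {2: 800.0, 3: 4_000_000.0}   # IDpol -> ClaimAmount

# 1) remove exact duplicate rows
seen, merged = set(), []
for row in freq_rows:
    key = tuple(sorted(row.items()))
    if key in seen:
        continue
    seen.add(key)
    merged.append(dict(row))

# 2) left join: policies with no severity record get ClaimAmount = 0, not NA
for row in merged:
    row["ClaimAmount"] = severity.get(row["IDpol"], 0.0)

# 3) cap extreme claim amounts at the chosen percentile threshold
CAP = 100_000.0   # stand-in for the paper's 99.99th-percentile value
for row in merged:
    row["ClaimAmount"] = min(row["ClaimAmount"], CAP)

print(merged)
```

The ANN-specific steps follow the same spirit: min-max scaling maps each numeric value v to (v - lo) / (hi - lo), and one-hot encoding turns each level of 'Area', 'VehGas', 'VehBrand', and 'Region' into its own 0/1 column.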
3.3. Exploratory Data Analysis

Exploratory data analysis (EDA) is an essential step in any research analysis, as it aims to examine the data for outliers, anomalies, and distribution patterns and helps to visualise and understand the data Komorowski et al. (2016). In this paper, most of the visualisations are bar charts, due to the importance of studying the distribution of a dataset prior to settling on a modelling technique. The detailed exploratory analysis is presented in the following subsections.

3.3.1. Analysis of Risk Features

It is necessary to examine the nature of the risk features and correct any irregularities in their structure. This section describes the modifications that were made to the risk features after looking at their distribution.

It was observed that there were 1227 entries in the dataset for which the exposures were greater than 1. According to Dutang and Charpentier (2020), all observations were made within one accounting year; hence, these exposures above 1 may have been the result of a data error and were corrected to 1. The distribution of the feature 'Exposure' before and after the cap is depicted in Figures 1 and 2.

Figure 1. Histogram showing the distribution of the feature 'Exposure' before capping.
Figure 2. Histogram showing the distribution of the feature 'Exposure' after capping.

Vehicle age is an important feature in the context of insurance analysis. Figure 3 shows the number of observations and the frequency of claims per vehicle age before capping. Looking at the trend of the frequency and the count, it can be observed that the trend is volatile and the data are scarce after the value of 20. Further inspection of the data shows that 98.7% of the insured vehicles are captured within a vehicle age range of 20.

Figure 3. Histogram showing the distribution of the feature 'VehAge' before capping.

Therefore, to ensure the integrity of this feature, a capping mechanism was put in place that set the 'VehAge' values at a maximum of 20 years. Figures 3 and 4 show the distribution of 'VehAge' before and after the cap; Figure 4 shows that the trend after capping is more consistent.

Figure 4. Histogram showing the distribution of the feature 'VehAge' after capping.

Figure 5 is a bar plot showing the number of observations and the frequency of claims for each value of driver age. The trend line is volatile after 85, and the plot shows further scarcity of data points after 90. To ensure that we do not miss any underlying trends at both ends of the data, we decided to cap 'DrivAge' at 90, covering 99.9% of the dataset. Figures 5 and 6 show the distribution of 'DrivAge' before and after the cap.

Figure 5. Histogram showing the distribution of the feature 'DrivAge' before capping.
Figure 6. Histogram showing the distribution of the feature 'DrivAge' after capping.

Figure 7 is a bar plot showing the number of observations and the frequency of claims for different values of vehicle power in our dataset. The figure shows that our data are scarce after 13 and that the trend line for the frequency becomes more volatile starting at this value. We capped 'VehPower' at 13, which covers 99.2% of the dataset. Figures 7 and 8 show the distribution of the feature before and after capping.

Figure 7. Bar plot showing the original distribution of the feature 'VehPower'.
Figure 8. Bar plot showing the distribution of the feature 'VehPower' after capping.

Figure 9 is a bar plot showing the frequency of claims and the number of observations for each value of BonusMalus. The figure shows an increase in the volatility of the frequency and in the scarcity of the observations after the value of 150. We capped 'BonusMalus' at 150, covering 99.96% of the data, thus ensuring that both the Bonus (values less than 100) and Malus (values over 100) are captured when building the models. Figure 10 shows the distribution of the BonusMalus values with respect to the frequency and number of claims after capping.

Figure 9. Bar plot showing the observed number of claims with respect to the BonusMalus values before capping.
Figure 10. Bar plot showing the observed number of claims with respect to the BonusMalus values after capping.

The feature 'ClaimNb' indicates that a few policies have more than 4 claims, with 16 being the most. These are rectified by setting them equal to 4, as they are likely data errors given that the data were gathered over the course of a year. Figures 11 and 12 show the distributions of the feature 'ClaimNb' before and after capping.

Figure 11. Bar plot showing the original distribution of the feature 'ClaimNb' before capping.
Figure 12. Bar plot showing the distribution of the feature 'ClaimNb' after capping.

A sensitivity study will be conducted to understand the impact of capping these variables on the predictions. This will be done by training our models both on the dataset with the uncapped variables and on the one with the capped variables, helping us assess whether the changes to the input variables improve the predictive capability of the models. More details about this analysis are included in Section 5.

3.3.2. Analysing the Relationship between Variables

As part of analysing the data, the association between the age of the driver and the number of insurance claims was examined; it is illustrated in Figure 13. From Figure 13, it is evident that the age range of 40 to 50 displayed a noticeably greater number of claims when compared to other age groups.
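The capping rules motivated in Section 3.3.1 can be summarised in one place. The cap values below are the ones chosen in the paper; the sample record is invented:

```python
# The caps chosen in Section 3.3.1 (values from the paper), applied to one
# made-up record to show the clipping behaviour.
caps = {"Exposure": 1.0, "VehAge": 20, "DrivAge": 90,
        "VehPower": 13, "BonusMalus": 150, "ClaimNb": 4}

record = {"Exposure": 1.3, "VehAge": 35, "DrivAge": 94,
          "VehPower": 9, "BonusMalus": 160, "ClaimNb": 16}

capped = {k: min(v, caps[k]) for k, v in record.items()}
print(capped)
# 'VehPower' stays at 9 (already below its cap); every other value is clipped.
```

In R, the same effect is typically achieved with `pmin` over each column; the point is simply that each feature is clipped at the threshold justified by its distribution.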
As driver age increases up to the 40-to-50 age group, there is a rise in the number of claims, after which a decreasing trend is observed. While a majority of the claims are made by middle-aged drivers, a comparison with Figures 5 and 6 reveals that younger drivers (aged 20 and below) present a significantly higher risk due to a higher frequency of claims.

Figure 13. Relationship between driver age and the average number of claims.

Figure 14 depicts the association between vehicle age and the number of claims. For instance, whereas vehicles with an age of 10 years reported 1841 claims, those with an age of 0 years reported 1335 claims. Comparing this with Figure 4, we observe that new vehicles, with a vehicle age of 0, are a much higher risk than vehicles aged 1 and above. Even though automobiles aged 1 and 2 showed a greater number of claims, brand new vehicles are riskier, due to factors such as the driver adjusting to the vehicle and its novelty.

Figure 14. Relationship between vehicle age and the number of claims.

Figure 15 depicts the relationship between vehicle power and the number of claims; it can be inferred that vehicles with powers of 5, 6, and 7 have a greater number of claims.

Figure 15. Relationship between vehicle power and the number of claims.

3.3.3. Correlation Analysis

As part of the correlation analysis, the collinearity between the different features was analysed using Pearson's correlation. Figure 16 shows the resulting correlation plot.

Figure 16. Correlation matrix for the features of the frequency dataset.

Some of the important findings from the correlation analysis are as follows:

• BonusMalus and driver age have a moderately negative correlation (−0.48), indicating that BonusMalus tends to decrease with increasing driver age.
• Area and Density are strongly correlated, with a strong positive correlation of 0.97.

Due to the strong correlation between the features 'Area' and 'Density', it is important to determine their applicability when developing models. As a result, three different scenarios were considered while building the models:

Scenario 1: Developing the model with all the risk features, including 'Area' and 'Density'.
Scenario 2: Developing the model with all the risk features excluding 'Density'.
Scenario 3: Developing the model with all the risk features excluding 'Area'.

These three scenarios were applied in the creation of frequency and severity models in all four techniques considered: GLM, GBM, ANN, and the hybrid model. For both the frequency and severity aspects of each technique, their performance was validated on the
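For reference, the Pearson statistic used in the correlation analysis above can be computed self-containedly. The toy values stand in for an ordinal encoding of 'Area' ("A" to "F") and matching 'Density' figures, and are invented for illustration:

```python
import math

# Self-contained Pearson correlation, the statistic used in the analysis
# above; the sample values are invented for illustration.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

area = [1, 2, 3, 4, 5, 6]                   # "A"=1 ... "F"=6
density = [30, 120, 500, 1500, 4000, 9000]  # people per square kilometre
print(round(pearson(area, density), 2))     # strongly positive, as in Figure 16
```

A coefficient this close to 1, like the 0.97 reported for Area and Density, signals that the two features carry largely redundant information, which is what motivates the three modelling scenarios.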
IOP Conference Series: Materials Science and Engineering

PAPER • OPEN ACCESS

Model estimation of claim risk and premium for motor vehicle insurance by using Bayesian method

To cite this article: Sukono et al 2018 IOP Conf. Ser.: Mater. Sci. Eng. 300 012027

Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. Published under licence by IOP Publishing Ltd.

4th International Conference on Operational Research (InteriOR), IOP Publishing

Sukono 1, Riaman 2, E. Lesmana 3*, R. Wulandari 4, H. Napitupulu 5*, S. Supian 6
1,2,3,4,5,6 Department of Mathematics, FMIPA, Universitas Padjadjaran, Indonesia
Email: sukono@unpad.ac.id; riaman_02@yahoo.com; man.msie@gmail.com; ratnawulandari624@gmail.com; napitupuluherlina@gmail.com; sudradjat@unpad.ac.id
* Corresponding authors: man.msie@gmail.com; napitupuluherlina@gmail.com

Abstract. Risk models need to be estimated by the insurance company in order to predict the magnitude of claims and determine the premiums charged to the insured. This is intended to prevent losses in the future.
In this paper, we discuss the estimation of claim risk models and motor vehicle insurance premiums using a Bayesian approach. It is assumed that the frequency of claims follows a Poisson distribution, while the claim amount is assumed to follow a Gamma distribution. The parameters of the claim frequency and claim amount distributions are estimated using Bayesian methods. The estimated distributions of claim frequency and claim amount are then used to estimate the aggregate risk model, as well as its mean and variance. The mean and variance estimators of the aggregate risk are used to predict the premium eligible to be charged to the insured. Based on the analysis results, the frequency of claims follows a Poisson distribution with parameter λ = 5.827, while the claim amount follows a Gamma distribution with parameters p = 7.922 and λ = 1.414 × 10⁻⁶. The resulting mean and variance of the aggregate claims are IDR 32,667,489.88 and IDR 38,453,900,000,000.00, respectively. In this paper the
prediction of the pure premium eligible to be charged to the insured is obtained, amounting to IDR 2,722,290.82. The predicted aggregate claims and premiums can be used as a reference for the insurance company's decision-making in the management of reserves and premiums for motor vehicle insurance.

Keywords: Poisson distribution, Gamma distribution, Bayesian method, aggregate claims, premium calculation

1. Introduction
Motor vehicle insurance is one of the important branches of non-life insurance. In many countries, motor vehicle insurance is the largest earner of total premium revenue, and Indonesia is among the countries where motor vehicle insurance contributes the largest total insurance premium. One of the driving factors causing the auto insurance industry to grow rapidly is the increasing number of motor vehicles in Indonesia every year. As a risk-taking institution, an insurance company must be able to anticipate the risk of many claims; otherwise, the resulting losses could bankrupt the insurer. In risk management, insurance companies must know the character of the risk, the purpose being to predict loss events in the future.

In modelling claim losses there are two important measures to consider, namely the frequency of claims and the amount, or severity, of claims. The frequency of claims is usually modelled using discrete distributions, including the binomial, geometric, negative binomial and Poisson distributions.
Claim severity represents the loss amount of an insurance claim, which is generally modelled using a non-negative continuous distribution, for example the exponential and Pareto distributions, together with attached characteristics such as tail and quantile properties. To estimate the distribution parameters of claim frequency and claim severity, according to Jaroengeratikun and Bolstad, the Bayesian method can be applied to estimate the parameters of a loss distribution model. The advantage of the Bayesian method is that prior information on the parameters involved in the risk model is specified first, and the observed sample data then yield a posterior distribution. Similar studies include Migon et al. and Sukono et al., who conducted risk analyses of health insurance claims in which the model parameters were estimated with a Bayesian approach. Meanwhile, Eliasson applied the Bayesian method to estimate credibility parameters for non-life insurance pricing on historical data of individual claims. In the analysis of
non-life insurance, the loss distribution model is an important concern for insurance companies, as it is very useful in determining the premium to be paid by the insured to the insurer. Based on the foregoing, this paper applies Bayesian methods to assess a collective risk model in non-life insurance. The Bayesian method is used to estimate the parameters of the claim frequency and claim amount models, which are then used in the calculation of risk and in determining the premium the insured must pay to the insurer.

2. Methodology
This section discusses the methodology, which covers the collective risk model, parameter estimation, and principles of premium calculation. We begin with the collective risk model.

2.1 Collective risk model
This section discusses the collective risk model. If the total frequency of claims for an insurance portfolio in time period t is denoted by N_t, then the aggregate claim amount S_t is

S_t = Σ_{i=1}^{N_t} X_{t,i},   (1)

where X_{t,i} is the i-th claim amount occurring during time period t, the so-called individual claim amount. The assumptions used in this model are:
• The claim amounts X_{t,i} are non-negative random variables that are independent and identically distributed.
• The claim frequency N_t is a random variable, independent of each individual claim amount X_{t,i}.
An important task is to evaluate the claim frequency distribution model as well as the claim amount distribution model.
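The collective risk model can be simulated directly. The sketch below uses Monte Carlo with a Poisson frequency and Gamma severity; the parameter values are borrowed from the paper's fitted results purely as illustrative inputs, not re-estimated here.

```python
# Monte Carlo sketch of the collective risk model S_t = sum_{i=1}^{N_t} X_{t,i}.
# Parameter values are illustrative, borrowed from the paper's fitted results:
# N ~ Poisson(5.827), X ~ Gamma(shape p = 7.9216, rate 1.414e-6).
import numpy as np

rng = np.random.default_rng(42)
lam, p, rate = 5.827, 7.9216, 1.414e-6
n_sims = 50_000

counts = rng.poisson(lam, n_sims)                       # claim frequency N_t
totals = np.array([rng.gamma(p, 1.0 / rate, k).sum()    # numpy uses scale = 1/rate
                   for k in counts])                    # aggregate claim S_t

# Independence of N and the X_i gives E(S_t) = E(N) * E(X), with E(X) = p / rate.
print(totals.mean(), lam * p / rate)
```

The simulated mean of S_t converges to E(N)·E(X), which is the identity used for the expected aggregate claim later in the paper.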
After the estimators of the claim frequency distribution model and the claim amount distribution model have been obtained, they can be used to determine the expected value E(S_t) and variance Var(S_t). These are the measures used to estimate the total loss in the collective risk model. In the collective risk model, the expected value and variance of the total loss can be calculated using the following equations:

E(S_t) = E(N) E(X),   (2)

Var(S_t) = E(N) Var(X) + [E(X)]² Var(N).   (3)

2.2 Parameter estimation
Suppose x_1, x_2, …, x_n is a random sample from a distribution with density f(x | θ). The likelihood function of the parameter θ is

l(θ | x_1, x_2, …, x_n) = ∏_{i=1}^{n} f(x_i | θ).   (4)

Estimation of the parameter θ can be performed by maximising (4); in this paper, however, the estimation is done using the Bayesian method. According to Bolstad and Gill, the Bayesian method is a parameter estimation method that uses preliminary information on the parameter θ, the so-called
prior distribution, together with information from the observed data. After the sample information is obtained and the prior has been specified, the posterior distribution is determined by multiplying the prior by the sample information contained in the likelihood. The prior is independent of the likelihood, and the posterior distribution is given by

f(θ | x) = f(θ) l(x | θ) / ∫₀^∞ f(θ) l(x | θ) dθ,

with posterior distribution f(θ | x), prior distribution f(θ), and likelihood function l(x | θ).

2.3 Parameter estimation of the claim frequency model
This section discusses parameter estimation for the claim frequency model. Assume that the claim frequency N is a discrete random sample following a Poisson distribution with mean λ. The probability function of the Poisson distribution is

p(n) = e^(−λ) λⁿ / n!,   n = 0, 1, 2, …

Therefore, the likelihood function of sample data following the Poisson distribution is

l(λ | n_i) = ∏_{i=1}^{t} e^(−λ) λ^(n_i) / n_i!.

According to Ntzoufras, the Poisson-distributed claim frequency N has a conjugate prior that is Gamma-distributed, with probability density function

f(λ) = β^α λ^(α−1) e^(−βλ) / Γ(α),   λ > 0.

The posterior of the parameter λ is the product of the likelihood l(n_i | λ) and the prior f(λ). Neglecting factors not involving λ, the posterior is proportional to

f(λ | n_1, n_2, …, n_t) ∝ λ^(Σ_{i=1}^{t} n_i + α − 1) e^(−(t + β)λ).
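This Poisson–Gamma update has a closed form, which can be sketched in a few lines. The month-by-month split of the counts below is illustrative, not taken from the paper.

```python
# Closed-form Poisson-Gamma conjugate update from the derivation above:
# prior lambda ~ Gamma(alpha, beta) plus observed counts n_1..n_t gives
# posterior lambda | n ~ Gamma(alpha + sum(n_i), beta + t).
def poisson_gamma_posterior(counts, alpha=0.001, beta=0.001):
    a_post = alpha + sum(counts)          # shape: alpha + total claims
    b_post = beta + len(counts)           # rate: beta + number of periods
    mean = a_post / b_post                # E(lambda | data)
    var = a_post / b_post ** 2            # Var(lambda | data)
    return a_post, b_post, mean, var

# 70 claims over 12 months; the month-by-month split is illustrative.
monthly_counts = [6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5, 5]
a, b, m, v = poisson_gamma_posterior(monthly_counts)
print(a, b, round(m, 4), round(v, 4))    # Gamma(70.001, 12.001), mean ~5.8329
```

With the paper's totals (70 claims, 12 months, α = β = 0.001), this reproduces the Gamma(70.001, 12.001) posterior reported in Section 3.2.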
Thus the posterior of the parameter λ is also Gamma-distributed:

λ | n_i ~ Gamma(Σ_{i=1}^{t} n_i + α, t + β).

Since the posterior of λ is Gamma-distributed, its mean (expected value) is

E(λ | n_i) = λ̂ = (Σ_{i=1}^{t} n_i + α) / (t + β),

while the variance of the posterior is

Var(λ | n_i) = (Σ_{i=1}^{t} n_i + α) / (t + β)².

2.4 Parameter estimation of the claim amount model
This section discusses parameter estimation for the claim amount model. In this paper the claim amount X is assumed to be a continuous random sample following a Gamma distribution with parameters p and λ. The probability density function of the Gamma distribution is

f(x) = λ^p x^(p−1) e^(−λx) / Γ(p).

Furthermore, the likelihood function of the claim amount sample data following the Gamma distribution is

l(x | p, λ) = λ^(np) ∏_{i=1}^{n} x_i^(p−1) e^(−λ Σ_{i=1}^{n} x_i) / Γ(p)ⁿ.

According to Ntzoufras, the Gamma-distributed claim amount
has a conjugate prior that is Gamma-distributed with hyperparameters α and β. The probability density function is

f(λ) = β^α λ^(α−1) e^(−βλ) / Γ(α),   λ > 0.

The posterior of the parameter λ is the product of the likelihood l(x_i | p, λ) and the prior f(λ). Neglecting factors not involving λ, the posterior is proportional to

f(λ | p, x) ∝ λ^(np + α − 1) e^(−λ(Σ_{i=1}^{n} x_i + β)).

Thus the posterior f(λ | p, x) is Gamma-distributed with parameters

α* = np + α   and   β* = Σ_{i=1}^{n} x_i + β.

Since the posterior of λ is Gamma-distributed, its mean is

E(λ | x) = λ̂ = (np + α) / (Σ_{i=1}^{n} x_i + β),

while the variance of the posterior is

Var(λ | x) = (np + α) / (Σ_{i=1}^{n} x_i + β)².

2.5 Premium calculation models
This section discusses the principles of premium calculation. Premiums are calculated on the basis of the equivalence principle, or premium calculation principle. Several principles are used for the calculation of premiums, namely:
a. Pure Premium Principle. The pure premium is calculated using the equation
p(t) = E(S),   (5)
where E(S) = E(S_t) / t.
b. Expected Value Principle. The premium is calculated using the equation
p(t) = (1 + θ) E(S),   (6)
where θ > 0 represents a premium loading factor.
c.
Variance Principle. The premium is calculated using the equation
p(t) = E(S) + α Var(S),   (7)
where Var(S) = Var(S_t) / t and α > 0 represents a premium loading factor.
d. Standard Deviation Principle. The premium is calculated using the equation
p(t) = E(S) + α √Var(S),   (8)
where α > 0.

3. Results and discussion
This section presents the results and discussion, covering: the analysed data, estimation of the claim frequency distribution model, estimation of the claim amount distribution model, and the calculation of insurance premiums. It starts with the analysed data, as follows.

3.1 Analysed data
The data used in this study are motor vehicle insurance claims data from the non-life insurance company PT. "X" for the period 2015 to 2016. The data are grouped into claim frequency and claim amount. A summary of the claims data is presented in Table 1.
Table 1. Data on claim amounts and claim frequencies

No | Interval (IDR)        | Frequency
1  | 1,978,435 – 3,121,754 | 6
2  | 3,121,755 – 4,265,074 | 13
3  | 4,265,075 – 5,408,393 | 20
4  | 5,408,395 – 6,551,712 | 12
5  | 6,551,713 – 7,695,031 | 8
6  | 7,695,031 – 8,838,351 | 4
7  | 8,838,352 – 9,981,670 | 7
Total claim amount: 392,307,367; total claim frequency: 70

Based on the claim frequency and claim amount data in Table 1, the most suitable (best-fitting) distribution is determined using EasyFit 5.6 software and tested statistically with a goodness-of-fit test.

3.2 Estimating the claim frequency distribution model
This section estimates the claim frequency distribution model. The steps comprise: identification of the distribution model, estimation of the distribution model, and testing the significance of the estimated distribution.

• Identification of the claim frequency data distribution model
Identification of the claim frequency data distribution model is done using the statistical software EasyFit 5.6. The result of the curve fitting can be seen in Figure 1.

Figure 1. Histogram and cumulative distribution function curve of the claim frequency data.

Based on the curve-fitting results, the suitable model for the claim frequency is the Poisson distribution, with the parameter estimated by the maximum likelihood method as λ = 5.8333. The Kolmogorov–Smirnov test, applied to the Poisson distribution using EasyFit 5.6, confirms that the most suitable model for the claim frequency data is the Poisson distribution.

• Parameter estimation of the claim frequency distribution model
Based on Figure 2, the selected prior values are α = 0.001 and
β = 0.001, because with this prior the Bayesian estimate approximates the likelihood estimate of the parameter λ.

Figure 2. Comparison of likelihood and Bayesian estimates of the parameter λ.

Because the claim frequency data are known and the values of α and β have been determined, the posterior distribution of the parameter λ can be expressed as

λ | n_i ~ Gamma(70.001, 12.001).

The statistical summary of the mean and variance of the parameter λ, observed over the twelve months, is given in Table 2.

Table 2. Statistical summary of the Bayesian estimate of the parameter λ, obtained manually

Parameter | Posterior α | Posterior β | Mean     | Variance | Standard Deviation
λ         | 70.001      | 12.001      | 5.832931 | 0.486037 | 0.697163

The OpenBUGS program is used to obtain a statistical summary of the parameter λ by simulating sample data from the posterior
distribution over a number of iterations. In this study, three chains are used to simulate samples from the posterior distribution, where each chain is inspected visually with a trace plot to test the convergence of the parameter. Each chain is run for 10,000 iterations. The statistical summary from the OpenBUGS program yields the Bayesian parameter estimate, comprising the mean and standard deviation, given in Table 3.

Table 3. Statistical summary of the Bayesian estimate of the parameter λ, obtained with the OpenBUGS program

Parameter | Mean  | Standard Deviation | 2.5% Percentile | 97.5% Percentile
λ         | 5.827 | 0.6985             | 4.544           | 7.276

3.3 Estimating the claim amount distribution model
This section evaluates the distribution model of the claim severity. The steps comprise: identification of the distribution model, estimation of the distribution model, and testing the significance of the estimated distribution.

• Identification of the claim amount data distribution model
Identification of the claim amount distribution model is done using the statistical software EasyFit 5.6. The result of the curve fitting can be seen in Figure 3.

Figure 3. Histogram and density function of the claim severity data.

Based on the curve-fitting results, the distribution model suitable for the claim amount is the Gamma distribution with parameters p = 7.9216 and λ = 1.41437 × 10⁻⁶. The Kolmogorov–Smirnov test, applied to the Gamma distribution using EasyFit 5.6, confirms that the most suitable model for the claim amount data is the Gamma distribution.
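Because the severity-rate posterior is also available in closed form, the manual and MCMC summaries can be cross-checked with a few lines of code. A sketch using scipy, with the paper's claim count and claim total as inputs:

```python
# Cross-check of the severity-rate posterior in closed form: with shape p
# fixed, prior lambda ~ Gamma(alpha, beta) and n claims totalling sum_x,
# the posterior is lambda | x ~ Gamma(alpha + n*p, beta + sum_x).
from scipy import stats

p, n, sum_x = 7.9216, 70, 392_307_367
alpha, beta = 0.001, 0.001

a_post = alpha + n * p                      # 554.513
b_post = beta + sum_x                       # 392,307,367.001

post = stats.gamma(a=a_post, scale=1.0 / b_post)
print(post.mean())                          # ~1.41e-6, matching Table 4
print(post.interval(0.95))                  # comparable to the Table 5 percentiles
```

The closed-form mean and 95% interval line up with the manually computed Table 4 values and the OpenBUGS percentiles in Table 5, which is expected since the MCMC run is sampling from this same conjugate posterior.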
• Parameter estimation of the claim amount data distribution model
Based on Figure 3, the selected prior values are α = 0.001 and β = 0.001, since with this prior the Bayesian estimate is close to the likelihood estimate of the parameter λ.

Figure 4. Comparison of likelihood and Bayesian estimates of the parameter λ.

Based on Equation (14), the values of the hyperparameters α* and β* are:

α* = np + α = (7.9216)(70) + 0.001 = 554.513,
β* = Σ_{i=1}^{n} x_i + β = 392,307,367 + 0.001 = 392,307,367.001.

A statistical summary of the mean and variance of the parameter λ, observed over the twelve months, is given in Table 4.

Table 4. Statistical summary of the Bayesian estimate of the parameter λ, obtained manually

Parameter | Posterior α* | Posterior β*    | Mean        | Variance     | Amount of Claim
λ         | 554.513      | 392,307,367.001 | 1.41 × 10⁻⁶ | 3.60 × 10⁻¹⁵ | 5,604,380.85

The posterior distribution
parameter  which is obtained manually Parameter Posterior  Posterior  Mean Variance Amount of Claim  554.513 392,307,367.001 1.41 10-6 3.6010-15 5,604,380.85 The posterior distribution explains the confidence level of the parameters contained in the sample data. Statistical summary of posterior parameter  shown in Tab el 5, which consists of the mean and standard deviation. These results are obtained by using statistical software OpenBUGS of three parallel chains and the iterations are done of 10,000 times. Three parallel chains with an iteration of 10,000 times in each chain are used to test the convergence of parameter , which is checked visually by using trace plot. Convergence test is done by observe whether the three chain are overlapping each other or not in the trace plot. Tabel 5. Statistical summary of Bayesian estimation parameter  which is obtained by OpenBUGS program Parameter ) (E Standard Deviation 2.5% Percentil 97.5 % Percentil E(X)  610 413 . 1 810 028 . 6 610 298 . 1 610 534 . 1 5,606,227.884 Based on Table 5, it is obtained that the mean of claims approved by the insurance company to be paid to the insured amounting to IDR5,606,227,884. 3.4 Estimating of the collective risk model This section the purpose is to estimate the collective risk model. Collective risk model is used for the calculation of premiums. Based on equation (1) the obtained number of aggregate claims is 367 , 307 , 392 tS . Based on equation (2) the obtained expectation of collective risk model is 88 . 489 , 667 , 32 ) 884 . 227 , 606 , 5 )( 827 . 5 ( ) (  tS E . While based on equation (3) the obtained variance of collective risk model is: 91234567890 ‘’“”4th International Conference on Operational Research (InteriOR) IOP PublishingIOP Conf. Series: Materials Science and Engineering 300 (2018) 012027 doi:10.1088300012027 ) 4879 . 0 ( ) 884 . 227 , 606 , 5 ( ) 000 , 000 , 610 , 967 , 3 )( 827 . 5 ( ) (2 tS Var 000 , 000 , 900 , 453 , 38  . 
The expected value and the variance of the collective risk model above can be used to predict the premium to be paid by the insured.

3.5 Calculating the premium
This section calculates the amount of premium to be paid by the insured to the insurance company. The premium amount depends on the claim frequency and the aggregate (collective) claim amount. In this study, the premium loading factors are assumed to be θ = 0.1 and α = 0.1.
• Using equation (5), the pure premium obtained is p(t) = 2,722,290.823.
• Using equation (6), the expected value principle premium obtained is p(t) = 2,994,519.906.
• Using equation (7), the variance principle premium obtained is p(t) = 320,452,000,000.
• Using equation (8), the standard deviation principle premium obtained is p(t) = 2,901,301.783.

4. Conclusions
In this paper, we discussed the
estimation of the claim risk model and motor vehicle insurance premiums using a Bayesian approach. Based on the data processing, the claim frequency is Poisson-distributed with estimated parameter λ = 5.827, and the claim amount is Gamma-distributed with estimated parameters p = 7.9216 and λ = 1.414 × 10⁻⁶. Using both distribution estimators, the aggregate claim distribution was formed, giving an expected aggregate claim for the non-life insurance company of IDR 32,667,489.88 with a variance of IDR 38,453,900,000,000.00. These expectation and variance values are used by the insurance company as a reference in determining the premium value. The predicted pure premium to be paid by the insured to the insurance company is IDR 2,722,290.82, the predicted expected value premium is IDR 2,994,519.91, the predicted variance premium is IDR 320,452,000,000.00, and the predicted standard deviation premium is IDR 2,901,301.78. Based on these premium predictions, the insurance company can determine a yearly premium that is reasonable and affordable for the insured.

Acknowledgment
The authors thank the Rector, the Director of DRPMI, and the Dean of FMIPA, Universitas Padjadjaran, for the Academic Leadership Grant (ALG) programme under the coordination of Prof. Dr. Sudradjat, and for the Unpad Lecturer Competence Research grant programme (Riset Kompetensi Dosen Unpad/RKDU) under the coordination of Dr. Sukono, which support increased research activity and publication by researchers at Universitas Padjadjaran.

References
Al-Noor N H and Bawi S F 2015 Bayes Estimators for the Parameters of the Inverted Exponential Distribution under Symmetric and Asymmetric Loss Functions J.
of Natural Sciences Research 5(4) pp 45-52
Azevedo F C D, Oliveira T A and Oliveira A 2016 Modeling Non-Life Insurance Price for Risk Without Historical Information REVSTAT – Stat. J. 14(2) pp 171-192
Bolstad W M 2007 Introduction to Bayesian Statistics 2nd edn (John Wiley & Sons)
Dickson D C M 2005 Insurance Risk and Ruin (Cambridge: Cambridge University Press)
Djuric Z 2013 Collective Risk Model in Non-Life Insurance Economic Horizons 15(2) pp 167-175
Eliasson D 2015 Bayesian Credibility Methods for Pricing Non-life Insurance on Individual Claims History Working Paper (Stockholm: Mathematical Statistics, Stockholm University)
Gamerman et al. 2006 Markov Chain Monte Carlo: Stochastic Simulation
for Bayesian Inference 2nd edn (London: Chapman & Hall/CRC)
Inanoglu H and Jacobs H 2009 Models for Risk Aggregation and Sensitivity Analysis: An Application to Bank Economic Capital J. of Risk and Financial Management 2 pp 118-189
Jaroengeratikun U, Bodhisuwan W and Thongteeraparp A 2012 A Bayesian Inference of Non-Life Insurance Based on Claim Counting Process with Periodic Claim Intensity Open J. of Statistics 2 pp 177-183
Lumbanbatu R M 2015 Modeling Insurance Claims Using a Compound Distribution International J. of Science and Research (IJSR) pp 1505-06
Manimaran R, Balakrishnan V and Narayanan V 2014 A Collective Risk Theory in Reinsurance Int. J. of Innovation in Science and Mathematics 2(1) pp 151-153
Migon et al. 2006 Bayesian Analysis of a Health Insurance Model J. of Actuarial Practice 13 pp 61-80
Nino S and Paolo C G 2010 A Collective Risk Model for Claims Reserve Distribution 29th Int. Congress of Actuaries (ICA 2010), Cape Town, 7-12 March 2010 pp 1-22
Ntzoufras I 2009 Bayesian Modeling Using WinBUGS: An Introduction (USA: Wiley)
Sukono, Suyudi M, Islamiyati F and Supian S 2017 Estimation Model of Life Insurance Claims Risk for Cancer Patients by Using Bayesian Method IOP Conf. Series: Materials Science and Engineering 166 (2017) 012022 pp 1-9
An examination of the relationship between vehicle insurance purchase and the frequency of accidents

Yung-Ching Hsu^a, Pai-Lung Chou^b, Yung-Ming Shiu^c,*
a Investigation Section, Civil Service Ethics Office, Kaohsiung City Government, 2, Sihwei 3rd Road, Lingya District, Kaohsiung, Taiwan, ROC
b Department of Risk Management and Insurance, National Kaohsiung First University of Science and Technology, Kaohsiung, Taiwan, ROC
c Department of Risk Management and Insurance, College of Commerce, National Chengchi University, Taipei, Taiwan, ROC

Article history: Received 27 June 2016; Accepted 19 August 2016; Available online xxx
Keywords: Vehicle accident occurrences; Insurance coverage; Moderating effects; Vehicle age

Abstract. The relationship between insurance, occurrences of road traffic accidents (RTAs) and general traffic safety has received growing attention over recent years among academics, industry practitioners and government policymakers. Using data on vehicle damage insurance in Taiwan, we examine whether drivers with higher insurance coverage are more likely to be involved in RTAs, and whether the relationship is moderated by the gender of the insured party as well as the age of both the vehicle and the insured party. Using a probit regression, we identify a positive relationship between coverage and claims and find that an insured party with a poor claims history has a higher probability of being involved in RTAs. Although our findings provide support for adverse selection theory, when considering the moderating effect of vehicle age, the positive relationship between coverage and claims becomes insignificant; indeed, vehicle age weakens the positive influence of coverage on claims. Our results suggest that drivers with a poor driving record purchasing higher insurance coverage for their new vehicle tend to be involved in more RTAs and submit more insurance claims.

© 2016 College of Management, National Cheng Kung University. Production and hosting by Elsevier Taiwan LLC.
All rights reserved.

Introduction
Annual statistics on road traffic accidents (RTAs) in the US show that the number of RTA fatalities in 2010 fell to an all-time low since records were first collated in 1950 (NHTSA, 2010); however, this is not necessarily the case in other countries, particularly those that are rapidly becoming motorized. A recent study reported that in 1990, road traffic injuries (RTIs) ranked as the ninth leading cause of the 'global disease burden' (Chekijian et al., 2014), and the World Health Organization has forecast that by 2030, RTIs will become the fifth leading cause of this burden (WHO, 2012).

Public attention to the remarkably unacceptable death toll from RTAs has grown over recent decades, with the 'Global Status Report on Road Safety, 2013' reporting that the total number of RTA fatalities across the world currently stands at 1.24 million per year (WHO, 2014). As a result, governments across the world have been placing considerable effort into enhancing
road safety by imposing relevant laws and investing in highway capital (Nguyen-Hoang & Yeung, 2014), whilst vehicle manufacturers have also placed emphasis on improvements in vehicle design, with the common goal of reducing the frequency and severity of traffic accidents. However, regardless of the amount of effort expended, RTAs cannot be completely avoided; thus, the goal must essentially be to reduce their frequency and magnitude.

In order to finance the monetary losses arising from RTAs, drivers may consider purchasing vehicle insurance, such as cover for physical damage or losses resulting from collisions, theft or other unfortunate events. It is, however, a legal requirement in many countries, including Taiwan, for drivers to have valid liability insurance coverage protecting the insured party against any legal liability arising from accidents causing bodily injury and

… purchase of vehicle insurance and RTA occurrences. In the present study, we aim to investigate which theory is supported by data on collision insurance in Taiwan. Secondly, we investigate the effects on this relationship that may be attributable to the gender of the insured party and the age of both the vehicle and the driver.
To the best of our knowledge, no prior empirical research has been undertaken with the aim of examining the possible moderating effects on the relationship between the purchase of vehicle insurance and occurrences of RTAs.

The debate continues within the literature with regard to whether 'adverse selection' theory is actually supported by the available empirical data. As noted by Karagyozova and Siegelman (2012), the term 'adverse selection' was first coined in the nineteenth century, with the related theory having subsequently been proposed and formalized by Akerlof (1970) and Rothschild and Stiglitz (1976). Adverse selection, which arises from asymmetric information, describes the fact that in an insurance market, insurance buyers possess residual private information about their risk that insurers lack even after risk classification (Shi, Zhang, & Valdez, 2012). According to adverse selection theory, high-risk individuals are more likely to purchase higher levels of insurance coverage, as a result of which these individuals will tend to have a higher probability of experiencing a loss. Riskier drivers will therefore tend to buy higher coverage and will also tend to submit more claims. This inference gives rise
will tend to have a higher prob-ability of experiencing a loss. Riskier drivers will therefore tend tobuy higher coverage and will also tend to submit more claims. Thisinference gives rise to a positive correlation between the amount ofinsurance coverage and the ex post occurrence of the insured risk.Such a positive relationship is found between coverage andclaims in several of the prior studies ( Puelz &Snow, 1994; Shi et al.,2012; Li, Liu &Peng, 2013 ); however, there are also numerous ex-amples of other studies where no evidence of adverse selection isdiscernible ( Chiappori &Salani C19eroux &Vanasse, 2001; Saito, 2006 ). The theory of ‘propitious selection ’(or ‘advantageous selection ’) is one particular argument proposedin these studies as the means of explaining the absence of anyrelationship between coverage and claims. Similar to adverse se-lection, propitious selection is also a choice made by the insuredparty, although in this case, the choice is advantageous to theinsurer.Another major difference between the two theories is that thoseadvocating propitious selection theories argue that there are fac-tors relating to risk aversion that are important in determininginsurance coverage purchase, factors that are not taken intoconsideration in adverse selection theory. 
These factors include, for example, the age and gender of the insured party and the age of the vehicle, each of which is considered to have an impact on the likelihood of incurring losses.

Unlike adverse selection theory, the argument in support of propitious selection theory posits that individuals who are highly risk-averse would be more likely to purchase greater insurance cover and take more physical precautions, thereby suggesting that such individuals will have a lower probability of being involved in RTAs (e.g., De Meza & Webb, 2001; Hemenway, 1990, 1992). Hemenway (1990) found weak evidence that automobile renters who wore their seat belts tend to buy the collision damage waiver insurance offered by car rental companies. Consistent with the theory of propitious selection, Hemenway (1992) found that drivers who purchase vehicle liability coverage are less likely to engage in drink driving and are more likely to engage in risk reduction behaviors. Based upon this argument, we would expect to find a negative correlation between coverage and claims.

It is worthwhile to note that adverse selection theory and propitious selection theory make different predictions about the relationship between coverage and claims. The former predicts a positive relationship, while the latter predicts a negative relationship. The former argues that risky drivers would be more likely to purchase more insurance. Since risky drivers have a higher likelihood of being involved in RTAs, a positive relationship between coverage and claims can therefore be observed. Conversely, the latter argues that risk-averse drivers tend to purchase more insurance. Because these drivers are less likely to be involved in RTAs, a negative relation is thus observed.

Like adverse selection theory, moral hazard theory also predicts a positive relation between coverage and claims. In the case of adverse selection, drivers are assumed to have private information about their risk type and preference, conditional on the insurer's risk classification of the buyer of insurance. Those with private information that they are high risk or risk-loving would purchase more insurance than those with private information that they are low risk (Finkelstein & McGarry, 2006). However, moral hazard theory argues that once drivers purchase more insurance, they tend to be riskier and are more likely to be involved in RTAs. Spindler, Winter and Hagmayer (2013) indicated that moral hazard deals with ‘hidden action’, while adverse selection concerns ‘hidden information’. In our paper, we attempt to distinguish between moral hazard and adverse selection by examining the correlation between prior and future claims (Abbring, Chiappori, Heckman & Pinquet, 2003; Cohen & Siegelman, 2010).

Using 1998–1999 data from the Auckland Car Crash Injury Study, Blows, Ivers, Connor, Ameratunga and Norton (2003) found that uninsured drivers were more likely to suffer car crash injuries than insured drivers, a finding which provides some support for the theory of propitious selection, insofar as less risk-averse drivers may choose not to purchase insurance and yet have a greater likelihood of being involved in RTAs.

It should be noted that under both the adverse selection and propitious selection arguments, it is assumed that insured parties have an informational advantage over insurers.
In the case of adverse selection theory, this advantage is revealed by riskier individuals purchasing higher levels of insurance coverage and having a greater likelihood of being involved in accidents; conversely, under the propitious selection argument, the advantage is revealed by risk-averse individuals purchasing higher levels of insurance, but with a lower probability of being involved in accidents.

The prior studies on traffic safety have tended to concentrate primarily on the prevention of road accidents, including a reduction in total injuries and fatalities, by addressing issues such as the use of seat-belts (Farmer & Wells, 2010), drinking and driving (Sloan, Chepke & Davis, 2013), speeding (Ardeshiri & Jeihani, 2014), the wearing of helmets (Bonander, Nilson & Andersson, 2014) and the use of child restraints (Romano & Kelley-Baker, 2015). It appears, however, that little research, if any, has been undertaken with specific focus on the ways in which insurance purchase behavior may influence the probability of RTAs from the perspective of traffic safety; the present study therefore aims to fill this gap in the literature. The prior insurance literature also suggests that there are other factors, such as risk aversion, that may well offset the positive correlation between coverage and claims (Hemenway, 1990, 1992; Shi et al., 2012). In contrast to these studies, in the present study we argue that the absence of a relationship between coverage and claims is possibly due to the moderating effect arising from certain factors known to the insurer, such as the age of the vehicle.

Using data on vehicle damage insurance contracts in Taiwan, we find that drivers with higher levels of insurance coverage and a poor driving history are more likely to submit claims, thereby indicating the existence of adverse selection. However, we also find that this positive relationship between coverage and claims is weakened by the age of the vehicle, thereby suggesting a weaker coverage-claims correlation for older vehicles.

The remainder of this paper is organized as follows. The background on the Taiwanese vehicle insurance market is provided in section The Taiwanese vehicle insurance market. The data and methodology adopted for the analyses undertaken in this study are described in section Data and methodology, followed in section Empirical results by the presentation and interpretation of our empirical results.
Finally, the conclusions drawn from this study are presented in the concluding section.

The Taiwanese vehicle insurance market

In 2009, vehicle insurance was the major line of business for non-life insurance companies in Taiwan, accounting for 49.36% of the total premiums received by the whole non-life insurance industry (Taiwan Insurance Institute, 2015). Vehicle insurance includes three main types of insurance: vehicle damage, theft and liability insurance. In this paper we use data on vehicle damage insurance because it is voluntary in Taiwan. We do not use data on vehicle liability insurance because drivers are required to purchase compulsory vehicle liability insurance, which only covers medical expenses associated with human injury and death. The maximum amount of compensation is approximately equivalent to US$73,000. Drivers who consider this amount inadequate would voluntarily purchase more coverage on vehicle liability. Drivers who wish to have insurance on damage inflicted on other drivers' cars need to buy voluntary vehicle liability insurance.

The aim of this paper is to examine the relation between insurance purchase and incidences of traffic accidents. It is therefore obvious that data on compulsory insurance cannot be used to examine this relation. Since all drivers in Taiwan need to purchase compulsory vehicle liability insurance, using voluntary vehicle liability insurance is also inappropriate, and the relation between insurance purchase and the frequency of traffic accidents would be less statistically significant.

It is worthwhile to note that the bonus-malus system, which is based on experience rating, has been effective in Taiwan since 1996. Under this system, the vehicle damage insurance premium rate of policyholders is determined by the insured- and insured vehicle-coefficients. The insured-coefficients include gender-age and past claims coefficients, while the insured vehicle-coefficients are related to the age and cubic capacity of the insured vehicle.

The bonus-malus system is symmetrical. To be more specific, the bonus element of this system encourages safe driving, while the malus element penalizes a bad driving record. Drivers with no claims in the previous year have their premiums discounted by 20 per cent. Those without claims for two (three) consecutive years have their premiums discounted by 40 (60) per cent. Conversely, drivers with two (three, four, …) claims during the past three years will have their premiums increased by 20 (40, 60, …) per cent.

Data and methodology

Data

The data on vehicle damage insurance contracts used in this study is obtained from the second largest non-life insurer in Taiwan. This insurer's market share was 11.17 per cent in terms of gross premiums written. Our study sample comprises a total of 726 observations, with the contracts, covering the 2009 policy year, being written in the 2009 year and effective for the following 12-month period.
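The symmetric bonus-malus schedule described above (20/40/60 per cent discounts for claim-free years, and 20 per cent surcharge steps for two or more claims in the past three years) maps directly to a premium multiplier. A minimal sketch, with a function name and interface of my own devising rather than anything taken from the insurer's actual rating system:

```python
def bonus_malus_multiplier(claim_free_years: int, claims_past_3yrs: int) -> float:
    """Premium multiplier implied by the symmetric bonus-malus rules
    described in the text (illustrative interface, not the insurer's)."""
    if claims_past_3yrs >= 2:
        # Two (three, four, ...) claims in the past three years:
        # premium increased by 20 (40, 60, ...) per cent.
        return 1.0 + 0.20 * (claims_past_3yrs - 1)
    if claim_free_years >= 3:
        return 0.40   # 60 per cent discount
    if claim_free_years == 2:
        return 0.60   # 40 per cent discount
    if claim_free_years == 1:
        return 0.80   # 20 per cent discount
    return 1.0        # base rate
```

For instance, a driver with three claim-free years pays 40 per cent of the base rate, while one with three claims in the past three years pays 140 per cent.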
Vehicle damage insurance provides protection for the insured vehicle if it is damaged in an accident.

A two-by-two contingency chart depicting the two-way frequency of the relationship between coverage and claims is presented in Table 1. As the table shows, 32.20 per cent (66/205) of those policyholders with high coverage levels submitted claims, compared with 18.62 per cent (97/521) of those policyholders with low coverage levels. Of all drivers who had submitted claims, 40.49 per cent (66/163) had purchased higher insurance coverage. The Chi-squared (χ²) statistic of 15.576 (p = 0.000) clearly indicates the existence of a positive relationship between coverage and claims, with this preliminary finding suggesting that insured parties with higher insurance coverage are more likely to submit claims.

Methodology

As indicated in Cohen and Siegelman (2010), an intuitive way of examining the relationship between coverage and claims is to run the probit regression as follows:

Claim_i = f(Coverage_i + CV_i) + e_i    (1)

where Claim_i is a dummy variable which is equal to 1 if policyholder i submits one or more claims, and 0 otherwise; Coverage_i refers to the coverage choice of policyholder i, a dichotomous variable which is equal to 1 for high coverage and 0 for low coverage; CV_i is a set of control variables (to be defined below); and e_i is a classic error term. We assume that the error term is independent of all explanatory variables and has the standard normal distribution.

Several related studies within the literature have identified a number of variables that are found to have some effect on the dependent variable; hence, these variables are included within the regressions as the control variables in the present study (Saito, 2006; Kim, Kim, Im & Hardin, 2009; Shi et al., 2012). These control variables include the age of the insured party, the gender of the insured party, the age of the insured vehicle, the ‘bonus-malus’ coefficient, the location of the insured vehicle, whether the vehicle is imported or locally produced, the vehicle capacity and whether the contract is a renewal contract.

Typically, young drivers are less experienced in driving and thus more likely to be involved in RTAs (Shi et al., 2012). Moreover, female drivers are also considered to be safer than their male counterparts. The ‘bonus-malus’ coefficient represents the policyholder's past claim history. It is expected that drivers with a worse claim history are more likely to file claims. Prior studies (e.g., Paefgen, Staake, & Fleisch, 2014) also argue that geographical zones are related to the probability of filing claims; it is obvious that driving in a busier area makes traffic accidents more likely. Cohen (2005) found that the value of the insured vehicle is positively related to the likelihood of having claims.
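For reference, the χ² statistic quoted above for Table 1 can be reproduced from the raw two-by-two cell counts; a short sketch using SciPy (the counts are those given in Table 1):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Two-way frequencies from Table 1: rows = coverage (low, high),
# columns = claims (none, one or more)
table = np.array([[424,  97],
                  [139,  66]])

# correction=False gives the plain (uncorrected) Pearson chi-squared
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 3), dof)  # close to the reported 15.576, 1 d.f.
```

With one degree of freedom, a statistic of this size corresponds to a p-value well below 0.001, consistent with the significance reported in the table note.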
Since domestically produced vehicles generally are cheaper than imported vehicles, we expect that domestically produced vehicles are less likely to have traffic accidents. Since the value of vehicles generally depreciates very quickly, drivers are less motivated to renew the insurance contracts on their vehicles. Those who do renew their insurance contracts generally are more risk-averse than those who do not. It is thus expected that policyholders that renew their contracts are less prone to submitting claims. All of the variables used in this study, along with their definitions, are presented in Table 2. It is worthwhile to note that these variables are observable to the insurer and used in pricing insurance policies.

Table 1
Two-way frequency of variables.

                  Claims
               0      1   Totals
Coverage  0   424     97     521
          1   139     66     205
Totals        563    163     726

Note: Chi-squared statistic = 15.576, statistically significant at the 1 per cent level (p = 0.000).

As stated above, a positive relation between coverage and claims could indicate the existence of adverse selection or moral hazard. One approach for effectively distinguishing between adverse selection and moral hazard is to examine the correlation between prior and future claims (Abbring et al., 2003; Cohen & Siegelman, 2010). Under adverse selection, the prior claims records of insured parties reflect their level of riskiness, which would basically remain unchanged after either an initial insurance purchase or an increase in the level of insurance coverage; that is, insured parties with a history of numerous claims would continue to be prone to accidents in the future. Thus, we would expect to find a positive relationship between prior and future claims. In the case of moral hazard, since insured parties have less incentive to take precautions to prevent accidents from occurring, we would expect to find higher coverage leading to a lower level of caution, which, in turn, indicates a higher probability of claims.

We also consider potential cross-effects on the dependent variable; one important dimension potentially moderating the relationship between coverage and claims is the age and the gender of the insured party. These two variables are also referred to as the risk-aversion variables, essentially because they are related to the level of risk aversion of the insured party. Individuals are generally considered to become more risk averse with an increase in age (Morin & Suarez, 1983), and indeed, women are generally considered to be more risk averse than men (Borghans, Golsteyn, Heckman, & Meijers, 2009).

Both the theories of adverse selection and propitious selection take into consideration the risk aversion of the insured party.
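The prior-versus-future-claims logic described above can be illustrated with simulated data: if a stable latent risk type drives both past and future claims, the two are positively correlated. Everything below is a synthetic illustration of that mechanism, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # hypothetical portfolio size, purely illustrative

# Under adverse selection a stable latent risk type drives BOTH past
# and future claims; under pure moral hazard no such stable link
# between past and future claims is implied.
risk_type = rng.uniform(0.05, 0.45, n)       # latent accident probability
prior_claims = rng.binomial(3, risk_type)    # claims over the past 3 years
future_claim = rng.binomial(1, risk_type)    # claim in the policy year

r = np.corrcoef(prior_claims, future_claim)[0, 1]
print(round(r, 2))  # positive, as stable risk types imply
```

A correlation near zero in such a check would point away from stable risk types and towards the moral-hazard (hidden action) explanation.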
It is worthwhile to note that under adverse selection theory risk aversion is assumed constant across individuals, while it is not under propitious selection theory. A risk-averse person prefers a certain amount of wealth to a risky situation yielding the same expected wealth, so more risk-averse drivers would require a higher risk premium to induce them to accept the risk (Harrington & Niehaus, 2004). Thus, ceteris paribus, there will be a greater likelihood of a more risk-averse individual purchasing higher insurance coverage whilst also taking greater precautions to prevent any occurrence of loss.

The risk-aversion variables in this study, both of which are included in Equation (2), are Insured_Age and Gender. We expect to find that the positive relationship between coverage and claims, as predicted by the theory of adverse selection, may be weakened by the risk-aversion variables, perhaps even becoming negative. We also anticipate that the positive relationship posited by adverse selection theory will again be weakened, or indeed, become a negative relationship, for older insured drivers and female drivers.

We further predict that the age of the insured vehicle may well prove to moderate the relationship between vehicle damage insurance coverage and claims; our prediction is primarily based upon the argument that old cars are generally driven by safe drivers (Shi et al., 2012). Since vehicles are insured at actual cash value (which is equal to replacement cost minus any depreciation), older vehicles tend to have smaller coverage than newer vehicles. Moreover, as the vehicle depreciates over time, one tends to purchase less coverage for it (Shi et al., 2012). We therefore further expect to find the age of the insured vehicle weakening the positive relationship between coverage and claims.

Equation (2), which includes the three moderator variables referred to above, is expressed as follows:

Claim_i = f(Coverage_i + Coverage_i × Gender_i + Coverage_i × Insured_Age_i + Coverage_i × Vehicle_Age_i + CV_i) + e_i    (2)

Table 2 (excerpt, as recovered). Variable definitions.

Coverage × Insured_Age: Product of Coverage and Insured_Age.
Gender: Gender of the insured party; Male = 1, Female = 0.
Coverage × Vehicle_Age: Product of Coverage and Vehicle_Age.
Bonus-Malus: Bonus-Malus coefficient of the insured party.
RN: Insured vehicle located in Northern Taiwan = 1; otherwise 0.
RC: Insured vehicle located in Central Taiwan = 1; otherwise 0.
RS: Insured vehicle located in Southern Taiwan = 1; otherwise 0.
RTH: Insured vehicle located in Taoyuan = 1; otherwise 0.

To control for the spurious effects of the decision to purchase high/low coverage on the likelihood of claims, effects that are potentially attributable to self-selection bias, we adopt the Heckman two-stage estimation approach for our analysis (Heckman, 1979).
It should be noted that in our regressions on the Coverage and Claim variables, we use the same set of control variables as those used in the prior literature (e.g., Shi et al., 2012).

Heckman (1979) argued that bias in the estimated regression coefficients is attributable to an omitted variable, which is referred to as the inverse Mills ratio; thus, the first stage of our analysis involves running a probit regression of the treatment variable (the Coverage variable in our analysis) on the control variables included in Equation (1) in order to obtain the inverse Mills ratio. The second stage involves running a regression on the outcome variable (the Claim variable in our analysis); in this stage, we include the estimated inverse Mills ratio as an additional regressor to correct for the potential problem of selectivity. Details on the use of the Heckman two-step estimation approach can be found in Johnston and DiNardo (1997).

Empirical results

Univariate analysis

The summary statistics of all of the variables used in the present study are reported in Table 3. As the table shows, of our total sample of insured drivers, approximately 28 per cent were found to have purchased high levels of insurance coverage, whilst approximately 22 per cent were found to have submitted claims. A typical insured driver in our sample is a male, aged approximately 45 years, whilst the typical insured vehicle is approximately four years old.

The correlation matrix between each of the variable coefficients is presented in Table 4, from which a positive and highly significant relationship is clearly discernible between Coverage and Claim at the 1 per cent level; this indicates that drivers who have purchased higher levels of insurance coverage are more likely to submit claims, thereby indicating preliminary evidence of adverse selection. The interaction terms between Coverage and the moderator variables, Insured_Age, Gender and Vehicle_Age, are also found to be significant, at least at the 1 per cent level. Table 4 therefore provides an initial understanding of the likely interaction between Coverage and the moderator variables.

Multivariate analysis

The estimation results using binomial probit regressions for Models (1) and (2), respectively relating to Equations (1) and (2), are presented in Table 5. The Chi-squared (χ²) statistic in Model (1) is found to be 99.654 (p-value = 0.000), whilst that in Model (2) is 111.016 (p-value = 0.000).
Statistical significance is found for both models at the 1 per cent level, thereby indicating that the fitted models are better than a null model without explanatory variables. The McFadden pseudo R² values are 0.129 in Model (1) and 0.144 in Model (2).

The regression results from Equation (1) reveal that the coverage variable is positive and statistically significant at the 5 per cent level, thereby indicating a greater likelihood of an insured party with higher coverage submitting claims. This finding provides preliminary evidence of the presence of asymmetric information; however, the positive relationship between Coverage and Claim may also suggest the existence of either adverse selection or moral hazard.

In the present study, the Bonus-Malus variable, which represents the prior claims history of the policyholder, is found to be positive and highly significant, thereby indicating that an insured party with a poor claims history has a higher probability of being involved in RTAs (Shi et al., 2012; af Wåhlberg, 2012). This evidence, together with our prior finding of a positive relationship between Coverage and Claim, would seem to indicate the existence of adverse selection (Cohen & Siegelman, 2010).

In order to examine the moderating effects of the Insured_Age, Gender and Vehicle_Age variables on the relationship between Coverage and Claim, interaction terms are subsequently added into our regressions. The adverse effects potentially arising from the problem of multi-collinearity are alleviated by mean centering the Insured_Age and Vehicle_Age variables, although the dummy variables, such as Coverage and Gender, are not mean centered.

The results obtained from Equation (2) reveal that Coverage remains positive, albeit insignificant (p-value = 0.113), which suggests that the significant results reported earlier will probably have been weakened by the moderator variables, Insured_Age, Gender and Vehicle_Age. We also find that the coefficient on the interaction term between Coverage and Vehicle_Age is negative and highly significant at the 1 per cent level, thereby suggesting that the impact of Coverage on Claim is weaker for insured drivers with older vehicles.

Consistent with Bair, Huang and Wang (2012), we find that the probability of the occurrence of RTAs is potentially affected by the location of the insured vehicle, with three out of the four variables on the insured vehicle location being found to be significant at the 1 per cent level. We also find evidence to show that the cubic capacity of the insured vehicle has an impact on the likelihood of occurrences of RTAs.

The estimation results based upon the Heckman two-stage approach are presented in Table 6. As noted earlier, to obtain the inverse Mills ratio, we first of all run a probit regression on the Coverage variable along with several explanatory variables, from which we find that the χ² statistic is 63.459 and highly significant,
India to overtake Japan as Asia's 2nd largest economy by 2030: IHS

India is likely to overtake Japan as Asia's second-largest economy by 2030, when its GDP is also projected to surpass that of Germany and the UK to rank as the world's No. 3, IHS Markit said in a report on Friday. Currently, India is the sixth-largest economy in the world, behind the US, China, Japan, Germany and the United Kingdom. "India's nominal GDP measured in USD terms is forecast to rise from USD 2.7 trillion in 2021 to USD 8.4 trillion by 2030," IHS Markit Ltd said. "This rapid pace of economic expansion would result in the size of Indian GDP exceeding Japanese GDP by 2030, making India the second-largest economy in the Asia-Pacific region." By 2030, the Indian economy would also be larger in size than the largest Western European economies of Germany, France and the UK. "Overall, India is expected to continue to be one of the world's fastest-growing economies over the next decade," it said. The long-term outlook for the Indian economy is supported by a number of key growth drivers. "An important positive factor for India is its large and fast-growing middle class, which is helping to drive consumer spending," IHS Markit said, forecasting that the country's consumption expenditure will double from USD 1.5 trillion in 2020 to USD 3 trillion by 2030. For the full fiscal year 2021-22 (April 2021 to March 2022), India's real GDP growth rate is projected to be 8.2 per cent, rebounding from the severe contraction of 7.3 per cent year-on-year in 2020-21, IHS Markit said. The Indian economy is forecast to continue growing strongly in the 2022-23 fiscal year, at a pace of 6.7 per cent. The rapidly growing domestic consumer market as well as its large industrial sector have made India an increasingly important investment destination for a wide range of multinationals in many sectors, including manufacturing, infrastructure and services.
The digital transformation of India that is currently underway is expected to accelerate the growth of e-commerce, changing the retail consumer market landscape over the next decade. "This is attracting leading global multinationals in technology and e-commerce to the Indian market," according to the report. "By 2030, 1.1 billion Indians will have internet access, more than doubling from the estimated 500 million internet users in 2020." The rapid growth of e-commerce and the shift to 4G and 5G smartphone technology will boost home-grown unicorns like online e-commerce platform Mensa Brands, logistics start-up Delhivery and the fast-growing online grocer BigBasket, whose e-sales have surged during the pandemic, IHS Markit said. "The large increase in FDI inflows to India that has been evident over the past five years is also continuing with strong momentum in 2020 and 2021," it said. This, it said, is being boosted by large inflows of investments from global technology MNCs such as Google and Facebook that are attracted to India's large domestic consumer market. Being one of the world's fastest-growing economies will make India one of the most important long-term growth markets for multinationals in a wide range of industries, including manufacturing industries such as autos, electronics and chemicals, and services industries such as banking, insurance, asset management, healthcare and information technology.
Union Budget 2022 highlights: Boost for various sectors, but middle class taxpayers left in lurch again

The finance minister presented the Union Budget 2022, the fourth budget of Modi 2.0, today. There were a host of measures for a number of sectors, aimed at boosting growth amid high and rising inflation and continuing Covid uncertainties. There were, however, remarkably few changes to the personal income tax structure in a year that had seen demands from various quarters for some sort of relief or another in times of a pandemic. Among today's assortment of announcements, the decision to tax receivers of digital asset transfers at a high 30% caught some serious attention. Aside from that, the announcement of a digital rupee was another big news item in a budget that saw no major populist giveaways. Here's a sector-wise detailed reading of the various measures the Finance Minister announced today: economy; expenditure, deficit and other key numbers; taxes; duties on industry; jobs; infra and manufacturing; steps on digital currency; housing and urban planning; MSMEs and startups; agri; electric vehicles; education and skilling; finance and inclusion; healthcare; telecom; women and children; ease of business and living; defence; transportation including railways; climate and net zero; travel; and mix-and-match measures.
Merdeka Battery soars 20% on debut as Indonesia's EV push draws investors

Indonesian nickel company Merdeka Battery Materials surged as much as 20% in its trading debut on Tuesday after raising 8.75 trillion rupiah (USD 591.82 million) in the country's third largest initial public offering this year. The strong debut, coming less than a week after Harita Nickel's robust listing, signals investors' growing appetite for Indonesia's nickel processing sector, which is part of the electric vehicle (EV) supply chain. Southeast Asia's largest economy has been stepping up its efforts to become a major player in the global EV supply chain by tapping its nickel reserves, the largest in the world. Merdeka Battery opened at 805 rupiah on the Indonesian stock exchange, compared with its IPO price of 795 rupiah. The shares later jumped to 955 rupiah, before steadying at 935 rupiah. The local benchmark stock index rose 0.52%. "The big interest in this IPO signals that investors are very optimistic about the prospects of nickel downstreaming and development of EV battery business that will be done by MBM," Merdeka Battery's CEO Devin Ridwan said at the market debut ceremony in Jakarta, referring to Merdeka Battery as MBM. Trimegah Bangun Persada, also known as Harita Nickel, debuted on the local stock exchange last Wednesday with a positive showing after raising 10 trillion rupiah. The stock has climbed 17% versus its IPO price. European automaker Volkswagen is planning to partner with Indonesian companies, including the Merdeka group, to source EV battery raw materials, a cabinet minister said on Sunday. When asked about the potential collaboration, Devin told reporters that the company is discussing the structure of its partnership with Volkswagen but there was no concrete agreement yet.
IPO PRICING Merdeka Battery, a unit of Merdeka Copper Gold, priced its IPO at the top of its 780 rupiah to 795 rupiah a share range earlier this month following strong interest from sovereign wealth funds, insurance firms and long-only local and overseas investors. It plans to use the IPO proceeds for loan repayment, working capital and capital expenditure, according to the IPO prospectus. UBS and Macquarie are the joint global coordinators of the IPO, and are joint bookrunners along with Bank of America and HSBC. The domestic underwriters are Indo Premier Sekuritas and Trimegah Sekuritas Indonesia. Merdeka Battery's IPO size ranked as the country's third largest this year after the listings of Harita Nickel and Pertamina Geothermal Energy, a unit of Indonesian state energy firm Pertamina. The Southeast Asian nation is one of the world's hottest IPO markets this year as the government seeks to privatise some state-owned enterprises. Indonesian share sales quadrupled to some USD 828 million in the first quarter, Refinitiv Eikon data showed, from USD 202 million in the year-earlier period. Other upcoming IPOs in Indonesia this year include Pertamina Hulu Energi, the upstream arm of Pertamina, that could raise up to
USD 2 billion, and state-owned fertiliser company Pupuk Kalimantan Timur that could raise USD 500 million.
FPIs sold $14 bln equities in Q1 2022, DIIs bought matching amount: Report New Delhi: Foreign portfolio investors (FPIs) sold around $14 billion worth of equities in the secondary market in the quarter that ended in March 2022, said Kotak Securities in a report. FPIs offloaded stocks in banks, diversified financials and IT services. On the flip side, domestic institutional investors (DIIs) bought around $14 billion worth of equities during the March quarter. Accordingly, DIIs bought banks, diversified financials and IT services stocks. "FPI holding in the BSE-200 index declined to 22.4 per cent in the March quarter from 23 per cent in the December quarter. DII holding in the BSE-200 index increased to 14 per cent in the March quarter from 13.4 per cent in the December quarter," the brokerage house said in the report. The highest increase in stake by FPIs was seen in Restaurant Brands Asia, Mindspace REIT and Lemon Tree Hotels; by mutual funds in Coforge, Metropolis Healthcare and Equitas Small Finance Bank; and by BFIs in Tata Steel, PVR and Restaurant Brands Asia. BFIs include banks, financial institutions and insurance companies. On the other hand, the highest decrease in stake by FPIs was witnessed in Metropolis Healthcare, Jubilant Foodworks and Motherson Sumi Systems; by MFs in RBL Bank, Voltas and City Union Bank; and by BFIs in Hindustan Aeronautics, Godrej Industries and Bata India. "FPIs were overweight on banks and diversified financials; underweight on consumer staples and pharmaceuticals, whereas MFs were overweight on banks and capital goods; underweight on consumer staples, IT services and oil, gas & consumable fuels."
Indian fintech firms will handle $1 trillion in assets by 2030: report Funding in Indian fintech firms touched $7.8 billion in 2021 and the industry is expected to handle $1 trillion worth of assets by 2030, according to a new report by venture capital firm Chiratae Ventures and consulting firm EY. Indian fintech firms are expected to clock $200 billion in revenue by 2030, the report added. At present, most of the funding has been skewed towards digital payment firms. Of the total $7.8 billion raised by the sector last year, $3.5 billion (roughly 44%) went to fintech payment firms, it said. As India’s payment landscape evolves, technologies such as near-field communication (NFC) payments, soft point-of-sale (PoS) penetration, and central bank digital currency (CBDC) use cases are expected to drive new innovations in this space, according to the report. Fintech models are evolving to be ‘pervasive’ across most sectors, and have found applications across segments such as agriculture, supply chain and ecommerce, among others, Chiratae Ventures cofounder and vice chairperson TC Meenakshi Sundaram told ET. “Financial services are the plumbing infrastructure. Our belief is that fintech will become more and more horizontal, propping up at the intersection of agri-tech, proptech, and B2B supply chain. The regulatory environment for the sector is also developing and we see a lot more openness from the government and regulator towards fintechs to integrate with the economy,” he said. “We believe that fintechs will drive $1 trillion in assets under management by 2030, across lending, insurance, wealth management and neo-banking,” Sundaram added. According to the report, half of the trillion-dollar AUM expected by 2030 will be powered by digital lending fintech firms, with a total of $515 billion worth of assets managed by startups in the space. Almost $1.2 billion was poured into Indian lending fintech firms in 2021, up almost 71% from the previous year.
The Reserve Bank of India is keenly observing the activities of lending startups as it looks to tighten regulations for the sector. In June, the central bank barred prepaid payment instruments from being loaded through credit lines. It is now looking to bring in further clarity on operating guidelines for the sector through its much-awaited digital lending guidelines. “India is recognised as a strong fintech hub globally and is increasingly becoming a talent destination for fintech businesses,” said Rajiv Memani, chairman and managing partner, EY India. According to the report, models such as first loss default guarantee (FLDG), which carry significant operational risk, will give way to new models such as co-lending, which will help mitigate risk. By 2030, the report added, wealth-tech fintech companies will be managing $237 billion worth of investor AUM, while insurtech and neo-banking will contribute $88 billion and $215 billion, respectively. India was home to 21 fintech unicorns -- privately held companies with a valuation of $1 billion or more -- as of March 2022.
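The report's segment-wise 2030 projections quoted above sum almost exactly to the trillion-dollar headline, which a quick tally confirms (all figures as quoted, in USD billions):

```python
# 2030 AUM projections by segment, as quoted from the report (USD billions)
aum_2030 = {
    "digital lending": 515,
    "wealth-tech": 237,
    "insurtech": 88,
    "neo-banking": 215,
}

total = sum(aum_2030.values())
lending_share = aum_2030["digital lending"] / total

print(f"total projected AUM: ${total}B")              # 1055 -- roughly the $1 trillion headline
print(f"digital lending share: {lending_share:.0%}")  # about half, as the report says
```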
White-collar job openings stay above pre-Covid levels in April The white-collar job market in India is on a sustained recovery path, as a pickup in business activity and rising attrition levels prompt companies to ramp up hiring, with the number of white-collar job positions in April exceeding the average pre-pandemic monthly numbers. According to data collated from LinkedIn and top company job boards by specialist staffing firm Xpheno, and shared exclusively with ET, the number of active job openings in April was 305,000, up 53% from a year earlier. This is also higher than the pre-Covid monthly average of 230,000-240,000 vacancies and almost at the same level as the March figure of 310,000. Job market experts attributed this to overall positive business sentiment, an increase in consumption, pent-up demand and greater confidence in growth prospects. “FY23 has begun on a sustained positive note on hiring action across key sectors. The year-on-year growth figure of 53% shows the extent of recovery and growth the job market has seen since previous waves of the pandemic,” said Kamal Karanth, cofounder, Xpheno. Economists and industry experts said even as companies keep an eye on rising inflation and the impact of the Ukraine-Russia war on input costs, hiring is likely to remain mostly unaffected as business sentiment continues to remain robust. “The job market will be getting even better as companies prepare for future growth and the economy comes out of pandemic-related restrictions. Hiring activity will remain buoyant going ahead,” said Madan Sabnavis, chief economist, Bank of Baroda. “High inflation or the war in Ukraine will not affect hiring decisions,” Sabnavis said. “Inflation is high not just in India, but it is a global phenomenon, and everyone knows that it will correct at some point of time. The war is also an external condition, and we are facing the collateral effects in terms of shortage of some commodities and rise in input cost.
However, all this will not have an impact on hiring as companies do prospective planning for the next few years,” he added. The IT services sector, which has been leading the hiring action, recorded 76% YoY growth in April. Other top sectors included global capability centres (GCC), banking, financial services and insurance (BFSI), retail, healthcare, electrical and electronics manufacturing, automotive and industrial automation. “India as a market is very attractive from a commercial perspective. It also has one of the largest university systems with both quality and quantity of talent,” said Mohit Kapoor, global chief technology officer at consumer researcher Nielsen IQ, which is planning to hire more than 5,000 people in India.
Ford extends production at TN plant till July-end Auto major Ford has extended its production schedule till July-end, against the earlier June-end, as the company continues discussions with the employees who are protesting against the severance package offered to them, the company said on Friday. The factory, located at Maraimalai Nagar on the outskirts of Chennai, has been witnessing labour unrest since May 30 over the compensation offered by the management. To a PTI query, the company spokesperson said, "Pursuant to the employee cascade on June 9, 2022, the company received a positive response, with a vast number of employees consenting to support production in parallel to continuing discussions on the severance package on offer." "Over 50 per cent of permanent employees have been supporting production since June 14 and the company has decided to extend production till end-July 2022. All the employees continuing to support production in July will get wage protection," the official said. Several employees resorted to a protest on May 30 at the factory. The company, after halting production, resumed operations in double shifts from June 14. Ford had said earlier that the severance package would only be available to those employees who resume production from June 14 and support the company in completing the production schedule. For those employees who continue to be on strike, the company warned of 'a loss of pay' as per the Certified Standing Orders, which remain in effect from June 14. "We look forward to having a constructive dialogue with employees and union representatives to explain the details and benefits of the severance package under the supervision of the labour department," the official said. According to Ford, the company has offered severance packages of approximately 115 days of gross wages for each completed year of service, which is significantly higher than the statutory severance package.
The cumulative package accounts for an ex gratia amount equivalent to 87 days of last drawn gross wages as of May 2022, a fixed Rs 50,000 for every completed year of service, benefits equivalent to a lump sum amount of Rs 2.40 lakh, and current medical insurance coverage until March 2024. "The cumulative amounts will be subject to a minimum amount of Rs 30 lakh and a maximum cap of Rs 80 lakh," it had said. The employees had staged a protest seeking better pay soon after the car major, in September 2021, announced that it would stop vehicle production at its two plants -- Sanand in Gujarat and Maraimalai Nagar near Chennai in Tamil Nadu -- as part of its restructuring exercise. Recently, Tata Motors announced the signing of a tripartite pact with Ford and the Gujarat government to acquire the American auto major's vehicle manufacturing unit at Sanand.
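The severance arithmetic described above can be sketched as a small function. Two conversions are my assumptions, since the article does not spell them out: the 87-day ex gratia component is applied per completed year of service, and a 30-day month is used to turn monthly wages into a daily rate.

```python
def severance_estimate(monthly_gross_wage: float, completed_years: int) -> float:
    """Rough sketch of the package described in the article (amounts in Rs).

    Assumptions (not stated in the article): the 87-day ex gratia component
    applies per completed year of service, and a month counts as 30 days
    when converting monthly wages to a daily rate.
    """
    daily_wage = monthly_gross_wage / 30
    ex_gratia = 87 * daily_wage * completed_years  # 87 days' wages per year
    fixed_component = 50_000 * completed_years     # Rs 50,000 per completed year
    lump_sum = 240_000                             # Rs 2.40 lakh benefits component
    total = ex_gratia + fixed_component + lump_sum
    # The article states a Rs 30 lakh floor and an Rs 80 lakh cap
    return min(max(total, 3_000_000), 8_000_000)
```

Under these assumptions, a hypothetical employee on Rs 60,000 gross a month with 15 completed years would receive Rs 36 lakh, while shorter tenures are lifted to the Rs 30 lakh floor.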
Over 2,250 startups added in 2021, raised USD 24.1 bn in 2021: Report NEW DELHI: More than 2,250 startups were added in the year 2021, over 600 more than in the previous year, a report by Nasscom and Zinnov said on Friday. The study, titled 'Indian Tech Start-up Ecosystem: Year of The Titans', said that with rising investor confidence, startups leveraging deep-tech, and the tapping of an unexplored talent pool, the Indian tech startup base continues to witness steady growth. As per the report, startups raised USD 24.1 billion in 2021, a two-fold increase over pre-Covid levels. In comparison to 2020, there was a 3X increase in the number of high-value deals (deals of more than USD 100 million), demonstrating investor confidence, with a pool of more than 2,400 active angel investors and a readiness to take significant risks, it said. While the US remains the leading source of foreign direct investment in startups, domestic investment is also growing: about 50 per cent of the deals had at least one India-domiciled investor, it added. The report said the Indian startup ecosystem saw a 2X gain in cumulative valuation from 2020 to 2021, with an estimate of USD 320-330 billion, demonstrating the sector's development and recovery throughout the pandemic. It added that more than USD 6 billion was raised via public markets with 11 startup IPOs in 2021. In the last decade, the ecosystem has played a key role in growing direct and indirect job opportunities, providing 6.6 lakh direct jobs and more than 34.1 lakh indirect jobs. The industries that saw the most net new job creation were banking, financial services and insurance (BFSI), edtech, retail and retail tech, foodtech, SCM (supply chain management) and logistics, and mobility. On the back of internet commerce, freelancers, and service industries, indirect jobs have also recovered, it said. "The performance of the Indian startup ecosystem in 2021 has proved the resilience and dedication being put in by multiple startups across segments.
The ecosystem has grown immensely and positioned itself as a vital contributor to the growth of India's digital economy," Nasscom President Debjani Ghosh said. She added that with record-breaking funding, an increase in the number of unicorns, and jobs being created in the near term, the Indian startup ecosystem's future looks even brighter going into 2022. "When compared to the UK, US, Israel, and China, 2021 has been an outstanding year for the Indian startup ecosystem, with the highest growth rate in terms of deals, both in seed-stage and late-stage funding, and number of startups," Zinnov CEO Pari Natarajan said. Natarajan added that Indian firms have done an outstanding job of selling into global markets, notably in categories such as the global SME and developer ecosystem. "The fact that Indian startups are digitally native serves as an excellent model for digital native enterprises throughout the world. I believe that the Indian startup ecosystem is just getting started. It's Day Zero and we are super excited about the potential