
Credit-scoring in India

Evaluating the Suitability of the Regulatory Framework under the Credit Information Companies (Regulation) Act, 2005



-Shobhit Shukla*


 

A. Introduction


In November 2021, a working group (“WG”) constituted by the Reserve Bank of India (“RBI”) observed in its report that numerous fintech service-providers in India process troves of data relating to consumers of financial services, without their meaningful consent. The WG consequently expressed its concern for the protection of consumers’ personal data and articulated the need to clearly specify the obligations of such service-providers, including those engaged in credit-scoring.


The publication of the report followed an amendment (“Amendment”) of the Credit Information Companies (“CICs”) Regulations, 2006, framed by the RBI under the CIC (Regulation) Act, 2005 (the “Act”) – the Amendment expanded the definition of ‘specified users’ to include eligible ‘entities engaged in the processing of information for the support or benefit of credit institutions’. Shortly thereafter, the RBI laid down certain eligibility criteria for the registration of such entities by CICs as ‘specified users’. As a result, eligible service-providers, including those engaged in consumers’ credit-appraisal for the benefit of credit institutions, will have direct access to credit information stored with CICs, subject to the requirements under the Act.


This post first provides an overview of the operation of credit-scoring models and the kinds of datasets such models employ to assist lenders in credit assessment. It then discusses the regulatory concerns posed by algorithmic credit-scoring models that employ ‘alternative data’ to assess creditworthiness. In light of this discussion, it examines the institutional framework for the collection and processing of credit information in India under the Act and the impact of its extension to eligible credit-scoring service-providers (as ‘specified users’). It argues that while extending the obligations under the Act to previously unregulated entities may be a step in the right direction, the Act is ill-equipped to address the central concerns associated with algorithmic credit-scoring. It contends that a comprehensive personal data protection (“DP”) framework is a more suitable avenue for addressing such concerns than a horizontal extension of the Act.


B. Credit-scoring mechanisms: Traditional and Alternative


In developing nations, expanding access to formal credit is a key policy-priority towards the achievement of financial inclusion. This focus has been manifestly clear in the Indian context as well, with the RBI recognising access to adequate credit as a crucial pillar in its strategy for financial inclusion. To enable financial institutions to facilitate equitable distribution of credit, a fair and transparent credit information ecosystem is understood as essential.


In India, CICs licensed by the RBI act as storehouses of credit information for credit institutions. In addition to collecting and sharing credit information relating to consumers (such as details of outstanding loans including types of loans and maturity dates, contingent liabilities and history of repayment/defaults) within the parameters of the Act, CICs also process such information to generate credit-scores. Historically, credit institutions have relied on the credit-scores generated by CICs to assess the creditworthiness of applicants.


However, recent years have witnessed a rapid expansion of consumers’ digital footprints, accelerating the global generation, consumption and storage of digital data to tremendous levels. In parallel, advancements in machine-learning and data-analytics have enabled the processing of such data at lower costs and with higher degrees of sophistication. As a result, the range of datasets utilised for formulating credit-scores has expanded significantly beyond financial data (as conventionally understood), to include ‘non-traditional’ or ‘alternative’ data. In the context of credit-scoring, the term is used to designate any data that falls outside the ambit of financial data – it may include data relating to non-credit payments (such as retail payments and utility bills), social-media profiles, employment history, telephone records and web-browsing history. The usage of alternative data (typically, in conjunction with ‘conventional’ financial data) in the formulation of credit-scores (‘alternative credit-scoring’ or “ACS” hereinafter) enables lenders to perform credit assessment in respect of individuals with meagre or no financial history – thus, the shift from traditional credit-scoring to ACS promises greater financial inclusion.


This shift has also been discernible in India, with lenders increasingly relying on ACS-models developed by fintech service-providers to assess creditworthiness, as the WG also observed. Under prevalent market practice (at least prior to the Amendment coming into force), a credit institution enters into an arrangement with such a service-provider, under which the institution provides the service-provider access to credit information stored with CICs. The service-provider aggregates such information with alternative data collected from a variety of sources, processes the aggregated data using its algorithm and generates a credit-score in respect of the applicant. The credit institution then relies on this credit-score to decide whether to extend a credit product to the applicant and, if so, on what key terms.
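To make this flow concrete, the following is a minimal, self-contained Python sketch of the arrangement described above. All data, feature names, weights and thresholds are invented for illustration, and the stub functions stand in for proprietary integrations with CICs and data vendors – this is a sketch of the pattern, not any provider’s actual implementation.

```python
# Illustrative ACS pipeline: bureau data + alternative data -> score -> decision.
# All values, weights and thresholds below are invented.

from dataclasses import dataclass

APPROVAL_THRESHOLD = 0.6


@dataclass
class CreditDecision:
    approved: bool
    score: float
    annual_rate: float | None  # key terms (pricing) follow from the score


def fetch_cic_report(applicant_id: str) -> dict:
    # Stand-in for credit information obtained from a CIC under the Act.
    return {"open_loans": 2, "months_since_default": 48, "utilisation": 0.35}


def fetch_alternative_data(applicant_id: str) -> dict:
    # Stand-in for aggregated non-traditional data (utility payments,
    # telecom records, etc.) collected from a variety of sources.
    return {"on_time_utility_ratio": 0.92}


def score(features: dict) -> float:
    # Stand-in for the service-provider's proprietary algorithm: a toy
    # weighted combination of traditional and alternative inputs in [0, 1].
    s = 0.4 * features["on_time_utility_ratio"]
    s += 0.3 * min(features["months_since_default"] / 60, 1.0)
    s += 0.3 * (1.0 - features["utilisation"])
    return s


def assess(applicant_id: str) -> CreditDecision:
    features = {**fetch_cic_report(applicant_id),
                **fetch_alternative_data(applicant_id)}
    s = score(features)
    approved = s >= APPROVAL_THRESHOLD
    # Risk-based pricing: lower scores map to costlier credit.
    rate = round(10.0 + 8.0 * (1.0 - s), 2) if approved else None
    return CreditDecision(approved, round(s, 3), rate)


print(assess("applicant-001"))
```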




C. Regulatory concerns associated with ACS


Although ACS has undoubted potential to enable expansion of credit to thin-file applicants, the leveraging of troves of disparate alternative data by digital means amplifies concerns associated with any algorithmic decision-making process.


First, whereas traditional credit-scoring models relied only on financial data (which is typically available in structured form), ACS mechanisms also deploy unstructured data in the formulation of credit-scores. This can include highly unorganised and disaggregated datasets, drawn from a variety of sources, including social-media profiles and telephone records. On one hand, the integration of a greater number of datasets can provide a detailed picture of a consumer’s consumption patterns; on the other hand, the deployment of unstructured datasets can also increase the likelihood of inaccuracy, as well as of fallacious correlations being drawn by the underlying algorithm between particular data inputs and creditworthiness.
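The statistical basis of this concern can be shown with a short simulation on entirely synthetic data: when the number of candidate features is large relative to the number of applicants, some feature will correlate strongly with repayment purely by chance, and a scoring algorithm may latch onto it.

```python
# Synthetic demonstration: many noisy features, few applicants.
import numpy as np

rng = np.random.default_rng(42)
n_applicants, n_features = 200, 1_000

# Repayment outcomes and candidate features are, by construction,
# independent random noise: there is no true signal to find.
repaid = rng.integers(0, 2, size=n_applicants)
features = rng.normal(size=(n_applicants, n_features))

# Correlation of each feature with the outcome.
corrs = np.array([np.corrcoef(features[:, j], repaid)[0, 1]
                  for j in range(n_features)])

print(f"strongest spurious correlation: {np.abs(corrs).max():.2f}")
# Typically around 0.25 here, despite there being zero true signal.
```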


Second, as the number of data inputs increases, so does the complexity and opacity of the algorithms by which such inputs are processed to arrive at the credit-score. Further, since the algorithms constitute the intellectual property of the service-provider, they are typically protected from disclosure. Without access to the algorithms that form the basis of the credit-scores, applicants have neither the means to understand the reasoning behind their credit-assessment nor any avenue to challenge the accuracy of the underlying data or the conclusions drawn from it.
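By way of contrast, the sketch below shows what minimal score-level transparency could look like for a simple linear scorecard, where each input’s contribution to the score is directly readable and adverse factors can be reported back to the applicant. The weights and features are invented, and complex ACS-models generally do not decompose this cleanly – which is precisely the opacity problem.

```python
# Invented linear scorecard over normalised features. Each contribution
# is directly readable, so adverse factors can be reported to the
# applicant -- the kind of explanation opaque ACS-models preclude.
weights = {"utilisation": -0.5, "on_time_utility_ratio": 0.4,
           "months_since_default_norm": 0.3}
applicant = {"utilisation": 0.9, "on_time_utility_ratio": 0.6,
             "months_since_default_norm": 0.2}

contributions = {f: weights[f] * applicant[f] for f in weights}
adverse = sorted((f for f in contributions if contributions[f] < 0),
                 key=contributions.get)

print("score:", round(sum(contributions.values()), 3))  # -0.15
print("adverse factors:", adverse)                      # ['utilisation']
```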

Third, data used for credit-scoring typically constitutes personal data, which is, in certain cases, highly sensitive in nature. In jurisdictions without a mature privacy-protecting framework, such data may be collected, processed and disclosed by service-providers without the meaningful consent of the applicant, constituting a clear threat to her informational privacy. Further, the aggregation of vast volumes of data by service-providers makes their IT systems attractive targets for breaches of data security.


Fourth, while the shift away from manual credit appraisal eliminates certain flagrant forms of human bias, algorithms can introduce and perpetuate more insidious forms of discrimination. As illustrated by Hurley and Adebayo, if the data employed to train the underlying algorithm is not adequately representative, decisions made by the algorithm may systematically disadvantage under-represented sections and cause financial exclusion. Further, an algorithm may contain embedded biases in the form of prima facie neutral data inputs, which (either by themselves or in conjunction with other characteristics) operate as substitutes for sensitive characteristics such as race, caste and sex. Moreover, given the complexity of ACS-algorithms, it may be extremely cumbersome for the service-provider or a regulator to identify and weed out the biases embedded in them.
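The proxy problem can be demonstrated with a toy simulation (synthetic data throughout): a scorer that never sees the protected attribute, but relies on a facially neutral feature correlated with it, still produces sharply divergent outcomes across groups.

```python
# Synthetic demonstration of proxy discrimination.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Protected attribute (never shown to the scorer).
group = rng.integers(0, 2, size=n)

# A facially neutral feature (e.g., a coarse location cluster) that
# happens to match group membership 80% of the time.
neighbourhood = np.where(rng.random(n) < 0.8, group, 1 - group)

# A toy scorer that uses only the neutral feature.
score = 0.5 + 0.2 * neighbourhood + rng.normal(0.0, 0.1, size=n)
approved = score >= 0.6

for g in (0, 1):
    print(f"approval rate, group {g}: {approved[group == g].mean():.0%}")
# Approval rates diverge sharply (roughly 30% vs 70%) even though the
# protected attribute never entered the model.
```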


Thus, with the evolution of traditional credit-scoring into ACS, the regulation of the flow of credit information and the protection of consumers’ personal data have become increasingly crucial, and simultaneously more challenging.


D. The Indian credit information ecosystem and the regulatory framework


In India, the credit information ecosystem, as envisaged under the Act, consists of three sets of actors:


(a) ‘Credit institutions’, including banking companies, non-banking financial companies (“NBFCs”) and companies otherwise engaged in the provision of credit in any manner.


(b) CICs, which are registered entities authorised by the RBI to operate as such. Every credit institution is required to become a member of at least one CIC.


(c) ‘Specified users’, i.e., entities which are authorised to obtain credit information from CICs, within the parameters set out under the Act. In addition to CICs and credit institutions, ‘specified users’ also include certain sectorally-regulated entities such as insurance companies and telecom service-providers.


A credit institution is required to submit credit information in respect of its borrowers to one or more CICs. CICs collate such information and make it available to specified users, which may use it to, inter alia, make effective credit decisions, evaluate credit risk, minimise adverse selection of customers and judge the creditworthiness of applicants. Further, CICs also assign credit-scores to prospective borrowers and furnish the same for the benefit of the credit institutions which are their members.


The Act recognises that credit information constitutes personal data, which merits privacy protection – hence, it lays down certain restrictions and safeguards in connection with its collection, processing and disclosure by CICs, credit institutions and specified users (collectively “Participants” hereinafter).


Prior to the notification of the Amendment, while such safeguards applied to CICs, credit institutions and certain specified users, they did not extend to ACS service-providers, since such entities did not fall within the definition of ‘specified users’. The processing of information by such entities was instead constrained largely by the provisions of the Information Technology Act, 2000 (“IT Act”) and, in particular, the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 formulated thereunder.


The extension of the obligations under the Act to eligible service-providers (and, in particular, those engaged in ACS) certainly achieves some of the regulatory objectives outlined by the WG. First, it provides an avenue for service-providers to access credit data under a more robust framework for its protection than the IT Act. In parallel, it brings an array of service-providers, which hitherto accessed such data in largely unregulated domains, under the supervision of the RBI. Second, it requires them to devise policies and procedures (or modify existing ones) to ensure that information concerning an individual is collected and processed with the individual’s knowledge, for specified purposes and subject to certain privacy and transparency safeguards. Third, it promises consumers greater control over their personal data, by guaranteeing them the right to access the credit information processed by service-providers and to rectify such information where necessary. In the event of rejection of an application for credit, applicants can access the underlying credit information report relied upon in making the decision. Fourth, it provides an enforcement mechanism, supervised by the RBI, to deal with any contravention by any CIC or eligible service-provider.


While the extension of the framework to specified users, and by implication, to ACS service-providers is noteworthy, it raises the question of whether the Act represents the correct pathway for addressing the regulatory concerns associated with ACS.


E. Limitations of the framework under the Act – the need for comprehensive data protection


From the above discussion on the prevalent credit-scoring mechanisms in India, it is clear that the imperative to regulate ACS is motivated by concerns relating to collection, aggregation and algorithmic processing of personal data.


In simple terms, leveraging of troves of personal data raises concerns relating to control of individuals over their personal data, transparency in the processing and use of such data and prevention of unauthorised access to such data. The RBI has sought to address these concerns by extending the framework applicable to CICs, to otherwise unregulated service-providers.


However, the Act, enacted almost two decades ago, is limited in its conception of credit information as well as the credit information ecosystem. It envisages credit information as conventional financial information, carrying clear correlations with the creditworthiness of the individual. It also conceptualises CICs as repositories of such information, furnished to them by credit institutions. This fails to capture the reality of ACS models, which leverage disparate varieties of alternative personal data, ranging from financial data to behavioural and psychometric data, for algorithmic analysis. As a consequence of this conceptual limitation, the framework of rights and obligations set out under the Act also suffers from deficiencies. This post argues that a comprehensive personal DP framework, with a specialised DP authority at its helm, will be better suited than the Act to address the concerns raised by contemporary ACS mechanisms.


First, a DP framework seeks to create various categories of personal data and allows for varying levels of protection to be extended to such data. For instance, the Indian (draft) Data Protection Bill, 2021 (“DP Bill”) recognises certain forms of personal data as ‘sensitive personal data’ and ‘critical personal data’ and envisages a differentiated set of limitations on the collection and processing of such data. In the context of ACS, instead of treating ‘credit information’ as a monolithic class of data, a DP framework would recognise that such data consists of varieties of data meriting differentiated levels of protection – accordingly, it would secure a differentiated set of safeguards corresponding to the desired level of protection for such data, within a rights-based paradigm.
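As a rough illustration of this differentiated approach, the sketch below tags data fields by category and derives handling rules from the tag rather than from a monolithic notion of ‘credit information’. The categories loosely mirror the DP Bill’s tiers, but the specific field classifications and safeguards are invented for illustration.

```python
# Differentiated protection: safeguards follow the category of the field.
from enum import Enum


class Category(Enum):
    PERSONAL = 1
    SENSITIVE = 2
    CRITICAL = 3


# Illustrative field classifications -- a real framework would define these.
FIELD_CATEGORY = {
    "repayment_history": Category.PERSONAL,
    "health_expenditure": Category.SENSITIVE,
    "biometric_id": Category.CRITICAL,
}

# Safeguards keyed to the category, not to 'credit information' as a bloc.
SAFEGUARDS = {
    Category.PERSONAL: {"explicit_consent": False, "local_storage_only": False},
    Category.SENSITIVE: {"explicit_consent": True, "local_storage_only": False},
    Category.CRITICAL: {"explicit_consent": True, "local_storage_only": True},
}


def requirements(field_name: str) -> dict:
    return SAFEGUARDS[FIELD_CATEGORY[field_name]]


print(requirements("biometric_id"))
# -> {'explicit_consent': True, 'local_storage_only': True}
```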


Second, DP frameworks of the nature of the DP Bill establish a fiduciary relationship between any entity which decides the means and purposes of processing personal data (the ‘data fiduciary’) and the individual to whom such data relates (the ‘data principal’). This fiduciary relationship gives the data principal meaningful control over her personal data. In the context of ACS, such control would translate into a requirement on ACS service-providers and/or credit institutions to procure affirmative consent from an applicant for the collection and processing of her personal data for credit assessment. It would also obligate financial service-providers (and ACS service-providers, in particular) to disclose to the applicant the personal datasets sought to be relied on for the purpose, as well as the risks associated with such processing. Notably, the Act does not stipulate any such requirement for the data principal’s consent or for the disclosure of risks.


Third, DP frameworks are founded on certain guiding principles that seek to protect and enhance the informational privacy of individuals. One such principle is purpose-limitation, i.e., personal data must be collected only for specified purposes and processed in a manner compatible with such purposes. The Act does, in fact, enunciate this principle – it prohibits the collection and processing of personal data by any Participant, except for purposes relating to its functions under the Act or in relation to its capacity as an employer. However, due to the vague formulation of the functions of ‘specified users’ (i.e., ‘the processing of information for the support or benefit of credit institutions’), the Act falls significantly short of the purpose-limitation principle as envisaged under DP frameworks. This vague formulation allows service-providers to collect personal data, without specific disclosure, for an extremely wide range of purposes falling within the meaning of ‘support or benefit of credit institutions’. In effect, it allows any specified user (including ACS service-providers) to access credit information for credit assessment and subsequently use such information to assist lenders operating in regulatory grey-zones in behaviourally targeting vulnerable customers with high-risk credit products.
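Operationally, purpose-limitation can be thought of as a check at the point of processing, as in the sketch below: each record carries the specific purposes disclosed at collection, and any later use is validated against them. A catch-all purpose such as ‘support or benefit of credit institutions’ would render such a check vacuous. All names here are illustrative.

```python
# Purpose-limitation as a programmatic check (illustrative names only).
from dataclasses import dataclass, field


@dataclass
class CollectedData:
    subject_id: str
    payload: dict
    permitted_purposes: frozenset = field(default_factory=frozenset)


def process(record: CollectedData, purpose: str) -> dict:
    # Refuse any processing whose purpose was not disclosed at collection.
    if purpose not in record.permitted_purposes:
        raise PermissionError(
            f"purpose '{purpose}' was not disclosed at collection")
    return record.payload


record = CollectedData(
    subject_id="applicant-001",
    payload={"on_time_utility_ratio": 0.92},
    permitted_purposes=frozenset({"credit_assessment"}),
)

process(record, "credit_assessment")         # permitted
process(record, "behavioural_ad_targeting")  # raises PermissionError
```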


Fourth, DP frameworks provide individuals an array of informational rights in connection with their personal data – this includes the right to access such data and to rectify or update such data where required. The Act also allows an applicant to access her credit information by making a request to the relevant credit institution – however, this right is limited only to credit information, and the corresponding obligation to furnish such information applies only to CICs. Added to the complexity of ACS-algorithms (discussed further below), this lack of transparency leaves an applicant with no direct avenue to access either the personal data inputs collected by specified users (including ACS service-providers) or their sources.


Fifth, the Act does not envisage any form of framework for algorithmic transparency to mitigate the risks posed by credit-scoring models. It fails to impose any affirmative obligation on ACS service-providers to enhance the interpretability of their algorithms or to minimise potentially discriminatory inputs or outcomes. Notably, in jurisdictions with mature DP frameworks, entities engaged in algorithmic processing of personal data are required to submit their algorithms for regulatory audits or impact-assessments in certain circumstances; in other jurisdictions, drawing from a ‘human-in-command’ approach, the DP framework also provides an individual the right to seek human intervention where algorithmic processing results in adverse outcomes. The global recognition of the risks relating to algorithmic processing of personal data is also reflected in developments in India – in addition to the concerns articulated by the WG, the Joint Parliamentary Committee on the DP Bill recommended the inclusion of a provision that seeks to enhance fairness and transparency in algorithmic processing. While concerns relating to transparency must eventually be weighed against service-providers’ intellectual property rights, it is clear that a DP framework with graded transparency and disclosure requirements can be a vehicle towards mitigating the adverse consequences of algorithmic decision-making, including in the financial-services sector.
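A ‘human-in-command’ safeguard of the kind described above could, in its simplest form, look like the sketch below: an adverse, purely algorithmic outcome is escalated to a human reviewer at the applicant’s request rather than becoming final. The threshold and the review queue are invented for illustration.

```python
# Minimal sketch of a human-review fallback for adverse algorithmic outcomes.
REVIEW_QUEUE: list[dict] = []


def decide(applicant_id: str, algo_score: float,
           human_review_requested: bool = False) -> str:
    if algo_score >= 0.6:
        return "approved"
    if human_review_requested:
        # Adverse algorithmic outcome: escalate to a human reviewer
        # instead of treating the automated rejection as final.
        REVIEW_QUEUE.append({"applicant": applicant_id, "score": algo_score})
        return "pending human review"
    return "rejected (algorithmic)"


print(decide("a-1", 0.72))                               # approved
print(decide("a-2", 0.41, human_review_requested=True))  # pending human review
```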


F. Conclusion


The discussion on prevalent credit-scoring mechanisms in India illustrates that such mechanisms pose significant concerns regarding the protection of individuals’ personal data, their informational privacy and algorithmic accountability. Following the expression of such concerns in the WG’s report, the extension of the Act to entities processing data for the benefit of credit institutions certainly represents a step towards mitigating some of these concerns. However, an examination of the framework under the Act suggests that it is anachronistic and inadequate to mitigate the concerns posed by ACS mechanisms. A comprehensive DP framework would address such concerns more holistically – instead of placing entry barriers in the nature of the eligibility criteria set out by the RBI, such a framework would provide a legal basis for the regulated flow of credit information in the financial ecosystem. Simultaneously, by setting out overarching principles applicable to the collection of personal data by any entity (regardless of whether it is otherwise regulated by any sectoral regulator), such a framework would also provide more robust protection for consumers’ informational privacy.

 

*The author is a Research Fellow at Shardul Amarchand Mangaldas & Co. His work relates to intersections between regulatory policy, finance and technology. The author thanks Shilpa M. Ahluwalia for her inputs. The views expressed are the personal views of the author.
