SEBI’s Proposed Guidelines for Responsible AI/ML Usage: Implications for Market Participants

On June 20, 2025, the Securities and Exchange Board of India (“SEBI”) released a consultation paper proposing comprehensive guidelines for the responsible usage of Artificial Intelligence (“AI”) and Machine Learning (“ML”) in Indian securities markets. This regulatory initiative aims to optimize benefits while minimizing potential risks associated with AI/ML integration, with a particular focus on investor protection, market integrity, and financial stability.

The consultation paper comes amid the increasing adoption of AI/ML technologies across India’s financial markets, including advanced developments in Generative AI and Large Language Models (“LLMs”). Market participants are currently using these technologies for advisory services, risk management, client identification, surveillance, and other critical functions. Public comments on the proposed guidelines are invited until July 11, 2025.

Current Usage in Indian Securities Markets

Before delving into the proposed guidelines, it’s important to understand the current landscape of AI/ML usage in Indian securities markets:

  • Exchanges: Currently employing AI/ML for surveillance, advanced cybersecurity, member support via chatbots, automation of compliance data input, processing of large data volumes, social media analytics, and pattern recognition.
  • Brokers: Leveraging AI/ML primarily for KYC/document processing, product recommendations, chatbots, digital account opening, surveillance, anti-money laundering, order execution, and product propensity analysis.
  • Mutual Funds: Using AI/ML tools mainly for customer support services such as chatbots, cybersecurity, surveillance tools, and customer segmentation.

Key Proposed Guidelines

The consultation paper outlines five core guiding principles for the responsible usage of AI/ML:

Model Governance

Market participants using AI/ML models will be required to:

  • Maintain an internal team with adequate skills, expertise, and experience to monitor and oversee AI/ML models throughout their lifecycle.
  • Implement appropriate risk control measures and governance frameworks, particularly during market stress.
  • Establish procedures for exception and error handling, including backup/fallback plans.
  • Designate senior management with appropriate technical knowledge for oversight of AI/ML models.
  • Manage relationships with third-party service providers through clear service level agreements, while remaining ultimately responsible for compliance.
  • Conduct periodic reviews and ongoing monitoring to ensure applications continue to perform as intended.
  • Define clear data governance norms and subject AI/ML systems to independent auditing.

Investor Protection and Disclosure

For business operations directly impacting customers/clients, market participants must:

  • Disclose the use of AI/ML to affected customers/clients, particularly for operations such as algorithmic trading, asset management, and advisory services.
  • Provide information about product features, purpose, risks, limitations, accuracy results, fees/charges, and data quality.
  • Ensure disclosures use comprehensible language to help clients make informed decisions.
  • Maintain investor grievance mechanisms in line with existing SEBI regulatory frameworks.

Testing Framework

The proposed guidelines require market participants to:

  • Adequately test and continuously monitor AI/ML models to validate results.
  • Conduct testing in segregated environments prior to deployment.
  • Perform shadow testing with live traffic to ensure quality and performance (an illustrative sketch follows this list).
  • Maintain proper documentation of all models and store input/output data for at least 5 years.
  • Implement continuous monitoring systems as AI/ML models may change behavior over time.
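For illustration only, the sketch below shows one way the shadow-testing requirement might be implemented in practice: copies of live inputs are routed to a candidate model whose outputs are logged for comparison, while only the production model’s outputs are acted upon. The function and object names are hypothetical and do not reflect any SEBI-prescribed design.

```python
# Illustrative shadow-testing sketch: the candidate model sees the same live
# inputs as the production model, but only the production output is acted upon.
# All names (production_model, candidate_model, handle_order_signal) are hypothetical.
import logging

logger = logging.getLogger("shadow_testing")

def handle_order_signal(features, production_model, candidate_model):
    """Serve the production decision; record the candidate's decision for later review."""
    prod_decision = production_model.predict(features)       # acted upon
    try:
        shadow_decision = candidate_model.predict(features)  # logged only
        logger.info(
            "shadow_comparison input=%s production=%s candidate=%s",
            features, prod_decision, shadow_decision,
        )
    except Exception:
        # A failure in the shadow path must never affect live processing.
        logger.exception("shadow model failed; production flow unaffected")
    return prod_decision
```

Retaining such paired logs of inputs and outputs would also support the five-year storage expectation noted above.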

Fairness and Bias

To address fairness concerns, market participants should:

  • Ensure AI/ML models do not favor or discriminate against any group of clients/customers.
  • Maintain adequate data quality that is sufficiently broad, relevant, and complete.
  • Implement processes to identify and remove biases from datasets.
  • Consider specific training courses for data scientists and relevant staff on potential data biases.

Data Privacy and Cybersecurity

The guidelines emphasize that market participants should:

  • Establish clear policies for data security, cybersecurity, and data privacy.
  • Ensure compliance with applicable laws regarding collection, usage, and processing of investors’ personal data.
  • Communicate information about technical glitches and data breaches to SEBI and other relevant authorities.

Tiered Regulatory Approach

Significantly, SEBI proposes a “regulatory lite” framework for AI/ML applications used for purposes other than those directly impacting customers/clients. For internal applications such as compliance, surveillance, and cybersecurity tools, only specific guidelines would apply, including requirements for skilled oversight teams, periodic reviews, ethical outcomes, testing, and data privacy measures.

Practical Implications for Market Participants

For Brokers

Brokers leveraging AI/ML for client-facing activities like algorithmic trading, robo-advisory, or digital onboarding will face more comprehensive regulatory requirements. They will need to:

  • Develop robust governance frameworks with clear accountability structures.
  • Enhance disclosure practices to clients about AI/ML usage.
  • Implement more rigorous testing environments and continuous monitoring.
  • Review third-party vendor agreements to ensure compliance with the new guidelines.

For Asset Managers

Investment managers using AI/ML for portfolio management, risk assessment, or investment advisory must:

  • Build teams with appropriate technical expertise for model oversight.
  • Develop transparent disclosure mechanisms about AI/ML usage and its limitations.
  • Implement robust testing frameworks and fallback options.
  • Ensure fairness across all investor categories.

For Fintech Firms

Fintech companies providing AI/ML-powered solutions to regulated entities will need to:

  • Collaborate closely with clients to meet SEBI’s oversight requirements.
  • Develop more transparent and explainable AI models.
  • Provide comprehensive documentation and testing results.
  • Establish clear SLAs that address regulatory compliance concerns.

Potential Compliance and Governance Challenges

Documentation and Explainability

One of the most significant challenges will be documenting AI/ML models in a way that makes them transparent and explainable. Complex models, particularly those using deep learning or generative AI, may function as “black boxes,” making it difficult to explain how they arrive at specific outcomes. The guidelines require market participants to maintain proper documentation explaining the logic of AI/ML models to ensure outcomes are explainable, traceable, and repeatable.
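By way of illustration only, the documentation expectation could be met with a structured record maintained for each model, along the lines of the sketch below; the field names are assumptions about what “proper documentation” might capture, not a template prescribed by SEBI.

```python
# A hypothetical structured record for model documentation; the fields are
# assumptions about what "proper documentation" might capture, not a SEBI template.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ModelDocumentation:
    model_id: str                           # unique identifier, e.g. "advisory-ranker-v2"
    business_purpose: str                   # what the model is used for
    accountable_owner: str                  # designated senior manager or team
    training_data_sources: List[str]        # datasets used, with versions
    last_validation_date: str               # most recent independent validation
    performance_metrics: Dict[str, float]   # e.g. accuracy on held-out data
    known_limitations: List[str]            # documented failure modes
    explainability_approach: str            # how individual outcomes can be traced
    fallback_procedure: str                 # manual or backup process if the model fails

record = ModelDocumentation(
    model_id="advisory-ranker-v2",
    business_purpose="Ranks products surfaced to advisory clients",
    accountable_owner="Head of Quantitative Research",
    training_data_sources=["client_transactions_2023", "product_master_v7"],
    last_validation_date="2025-05-31",
    performance_metrics={"auc": 0.81},
    known_limitations=["Limited history for newly onboarded clients"],
    explainability_approach="Per-decision feature attributions retained with inputs/outputs",
    fallback_procedure="Rule-based ranking maintained by the product team",
)
```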

Third-Party Oversight

Many market participants rely on third-party vendors for AI/ML solutions. The guidelines clarify that regulated entities remain responsible for ensuring compliance with all applicable laws, rules, and regulations, even when using third-party services. This creates a significant oversight burden, particularly for smaller firms with limited technical expertise.

Data Quality and Bias Mitigation

Ensuring data quality and removing biases from datasets will require sophisticated tools and expertise. Market participants will need to implement processes to regularly assess data quality and detect potential biases, which may require investment in specialized analytics capabilities.
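Purely as an illustration, one simple fairness check a compliance team might run is a comparison of favourable-outcome rates across client segments; the column names, sample data, and the 0.8 threshold below are assumptions, not values drawn from the consultation paper.

```python
# Hypothetical disparate-impact style check comparing favourable-outcome rates
# across client segments; column names, sample data, and the 0.8 threshold are
# illustrative assumptions.
import pandas as pd

def outcome_rate_by_segment(df: pd.DataFrame, segment_col: str, outcome_col: str) -> pd.Series:
    """Share of favourable outcomes (coded as 1) within each client segment."""
    return df.groupby(segment_col)[outcome_col].mean()

def segments_needing_review(rates: pd.Series, threshold: float = 0.8) -> list:
    """Flag segments whose favourable-outcome rate falls below `threshold`
    times the best-treated segment's rate."""
    reference = rates.max()
    return [segment for segment, rate in rates.items() if rate < threshold * reference]

decisions = pd.DataFrame({
    "client_segment": ["retail", "retail", "retail", "hni", "hni", "hni"],
    "approved":       [1, 0, 0, 1, 1, 1],
})
rates = outcome_rate_by_segment(decisions, "client_segment", "approved")
print(segments_needing_review(rates))   # ['retail'] -> warrants closer examination
```

A flag from a check like this would prompt further investigation of the underlying data and model, not an automatic conclusion of discrimination.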

Continuous Monitoring

Unlike traditional algorithms, AI/ML models can evolve as they process more data. The guidelines recognize this by requiring continuous monitoring throughout deployment. Implementing effective monitoring systems that can detect subtle changes in model behavior presents a technical challenge.
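As one illustrative approach (not something the consultation paper specifies), model behaviour can be tracked by comparing the distribution of recent model scores against the distribution recorded at validation, for example with a population stability index; the bucket count and alert threshold below are common heuristics, not regulatory values.

```python
# Minimal population-stability-index (PSI) sketch for detecting drift in model
# scores; the 10 buckets and 0.2 alert threshold are common heuristics, not
# values taken from the consultation paper.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, buckets: int = 10) -> float:
    """Larger values indicate the recent score distribution has drifted from the baseline."""
    edges = np.quantile(expected, np.linspace(0, 1, buckets + 1))
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    # Clip recent scores into the baseline range so every observation is counted.
    act_pct = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)[0] / len(actual)
    exp_pct = np.clip(exp_pct, 1e-6, None)   # avoid log(0) / division by zero
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

baseline = np.random.default_rng(0).normal(0.50, 0.10, 10_000)  # scores at validation time
recent = np.random.default_rng(1).normal(0.55, 0.12, 2_000)     # recent production scores
if population_stability_index(baseline, recent) > 0.2:
    print("Score distribution has drifted materially; trigger a model review")
```

In practice, statistics of this kind would feed alerting and periodic review processes rather than operate as a standalone control.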

Preparing for Compliance

To prepare for the implementation of these guidelines, market participants should consider:

  • Gap Assessment: Evaluate current AI/ML applications against the proposed guidelines to identify compliance gaps.
  • Governance Framework: Establish or enhance AI governance frameworks with clear lines of accountability.
  • Documentation Protocols: Develop comprehensive documentation protocols for AI/ML models, including development methodologies, testing procedures, and performance metrics.
  • Disclosure Templates: Create client disclosure templates that clearly explain AI/ML usage, benefits, limitations, and risks.
  • Vendor Management: Review and update third-party agreements to ensure they address the requirements in the guidelines.
  • Testing Environments: Establish segregated testing environments and shadow testing capabilities.

Conclusion

SEBI’s proposed guidelines represent a significant step toward creating a responsible AI/ML ecosystem in India’s securities markets. The tiered approach demonstrates a thoughtful balance between encouraging innovation and ensuring adequate safeguards. Market participants should begin preparing for these requirements now, as they signal the direction of future regulation in this rapidly evolving space.

The consultation process provides an opportunity for industry stakeholders to shape the final guidelines. Market participants should consider submitting comments by July 11, 2025, particularly on aspects that may present implementation challenges.

As AI/ML technologies continue to transform financial markets, these guidelines will likely evolve. Establishing strong governance frameworks now will position firms well for future regulatory developments while allowing them to leverage the competitive advantages that responsible AI/ML implementation can offer.
