In our latest Q&A, Alexander Sokol, CompatibL’s Executive Chairman, discusses the company’s innovative artificial intelligence (AI) and machine learning (ML) initiatives. He also explains how CompatibL used OpenAI’s DaVinci and GPT-4 models in its newest product for model governance, which helps quants create and maintain their detailed model documentation.

Q: What are some of the artificial intelligence highlights that have recently been unveiled in the latest version of the CompatibL Platform?

A: There are many things happening at CompatibL; there is always work being done on improving models and the product’s features. We have several artificial intelligence initiatives, and one of them, which we already have in production, is a natural language interface based on GPT-4 in the CompatibL Platform.

Of course, natural language interfaces are not new. In fact, back in the early days, traders already had some shortcuts they would use. For example, if they wanted to create a swap, they would use certain keywords to describe the swap instead of going to the screen and clicking buttons or selecting numbers. But now there are large language models, such as GPT-4 from OpenAI, as well as open-source models that banks can run. Many other companies are trying to catch up, but the gold standard right now is OpenAI.

We have used GPT-4 and its plug-in API to develop a totally new level of natural language interaction with our platform. Previously, you could select a screen and use settings to configure the screen and the workflow. Now, everything the system can do can also be controlled by textual queries. In other words, you can literally ask it, “show me a limit report for this counterparty,” “break it down by desk,” “break it down by trade type,” “tell me what will happen if we do a new swap for a 100 million notional,” etc.
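To illustrate the general pattern, a query like “show me a limit report for this counterparty” can be routed to a platform action via the tool-calling mechanism of a chat model. Everything below, including the tool name, its parameters, and the dispatch function, is a hypothetical sketch, not the CompatibL Platform’s actual API:

```python
# Hypothetical sketch of routing a natural-language query to a platform
# action via LLM tool calling. The tool schema and dispatch function are
# illustrative assumptions, not the CompatibL Platform's actual API.

# Tool schema in the OpenAI chat-completions "tools" format: the model is
# told which actions exist and what parameters they take.
limit_report_tool = {
    "type": "function",
    "function": {
        "name": "show_limit_report",
        "description": (
            "Display a limit report for a counterparty, optionally "
            "broken down by desk or trade type."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "counterparty": {"type": "string"},
                "breakdown": {"type": "string", "enum": ["desk", "trade_type"]},
            },
            "required": ["counterparty"],
        },
    },
}

def handle_tool_call(name: str, arguments: dict) -> str:
    """Dispatch a tool call selected by the model to a platform action."""
    if name == "show_limit_report":
        breakdown = arguments.get("breakdown")
        suffix = f" by {breakdown}" if breakdown else ""
        return f"limit report for {arguments['counterparty']}{suffix}"
    raise ValueError(f"unknown tool: {name}")
```

In a full implementation, the user’s text and the tool schema would be sent to the chat API, and the tool call the model returns would be passed to the dispatch function.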


The GPT-4-based natural language interface is just on a totally different level from what went before. One reason for this is that it remembers the context. You can say, “now show me the same report, but with a different trade.” Or “tell me what counterparty we should do this trade with that will optimize limits or limit consumption or optimize credit risk capital.” This new AI feature—a chat-based conversation with the system—is something we believe will empower our users.

We have seen a lot of cases where risk managers or traders are aware that there is some functionality within their system, but they do not have the time or the ability to study every single screen and every single option the system has. Sometimes they are not using a certain functionality that would be useful to them, because it takes time to learn and time to configure: you have to go and manually set certain options.

Using this new AI feature, they can express what they want. It is almost as if every risk manager or every trader has an apprentice who will essentially listen to what they would like to accomplish and then configure the system for them accordingly. And the take-up has been tremendous. There is so much interest in AI right now, and it has been such an obvious improvement in how our system is used.

Q: What is on CompatibL’s immediate radar right now with respect to new AI-based enhancements that you are looking to roll out in the foreseeable future?

A: We have just made a new product announcement this week, and this is the first time I will be revealing more about it. It is our new AI-based model governance product. With it, I believe we have been able to identify a near-perfect intersection between quant models, the financial industry, and AI. Or, more specifically, the large language model aspect of AI.

Of course, quant finance deals with numbers, not speech. And AI is not yet good at math—even middle-school or high-school level math—let alone quant research. But there is one thing in quant finance that is truly language-based, and that is model governance. Our new product addresses this sweet spot, where we can use the incredible power of language models for something advanced, something related to quant finance.

For example, when a bank’s models are modified by the quants, all the changes have to be documented. Usually, a single model is described by a 200- to 500-page document. And it is not one document for the entire bank: the quants must document hundreds of models, each with hundreds of pages of charts and descriptions explaining in great detail how the model works.

Documenting all this correctly is critical to the reliable functioning of the financial industry, as these risk and valuation models are used in financial reporting and in risk reporting for regulators. But very often things fall through the cracks, and the documentation is not always up to date.

In our new product, we use OpenAI’s DaVinci model to analyze code changes, and the GPT-4 model to analyze version control commit logs, the model’s release notes, and the model documents. The product can look at source code changes and version control commits, suggest changes to the model’s documentation or release notes summarizing what was done, and be used for validation.

It can then be used to generate or update model documentation or release notes, which are then sent for final review and approval by humans. Regulators will not accept anything less.
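As a rough sketch of how such a pipeline might be wired up (the prompt wording, model name, and client usage below are illustrative assumptions, not CompatibL’s actual implementation):

```python
# Illustrative sketch: screening commit messages for documentation impact
# with an LLM. The prompt wording and model name are assumptions.

def build_release_note_prompt(commit_messages: list[str]) -> str:
    """Assemble a prompt asking the model to flag commits that should
    appear in the model's release notes or documentation."""
    commits = "\n".join(f"- {m}" for m in commit_messages)
    return (
        "You are reviewing version control commits for a quant pricing model.\n"
        "Flag any commit that changes model behavior (for example, calibration,\n"
        "numerical method, or supported currencies) and draft a release-note\n"
        "entry for each flagged commit.\n\n"
        f"Commits:\n{commits}"
    )

def suggest_release_notes(commit_messages: list[str], model: str = "gpt-4") -> str:
    """Send the prompt to the chat API (requires the `openai` package and
    an API key); the suggestions still go to a human for final review."""
    from openai import OpenAI  # imported here so the prompt builder stays offline
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_release_note_prompt(commit_messages)}],
    )
    return response.choices[0].message.content
```

The key design point, per the interview, is that the model only drafts suggestions; generated release notes and documentation changes go to humans for approval.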

Q: Can you share a practical example of this AI Model Governance feature?

A: We did a case study with one of our early clients, an early adopter of this feature. Somewhere among many thousands of commit messages, they were able to find descriptions such as “removed a currency from the calibration” or “changed the numerical method”: changes that had previously gone unnoticed but that regulators had to know about. These are exactly the kinds of changes that are very easy to miss, because they are hidden among such a large number of commits.

Using AI, we have been able to identify changes both from the commits, where an engineer had flagged the change in the version control system but it did not make it into the release notes, and from the source code itself, where the change was not even in the commit log but was made to the code. We used OpenAI both for the source code review and for the document review, and in both cases to identify gaps, flag errors or omissions, and suggest changes. We were absolutely amazed at how well it worked.

OpenAI’s GPT model was able to understand the nuances of quant language on a totally unexpected level. In situations where even a human would take a moment to work out exactly what a certain description referred to, it was able to understand the nuanced description and place it correctly into the release notes or the model documentation.

Of course, we are not at the point where we can completely automate changes to code and documentation. Humans still have to review the model’s output. But we have found this feature delivers roughly a two- to threefold productivity improvement for the humans who work on this documentation, and it makes their job much easier, and more fun.

Q: Are there any concerns about the applications of AI?

A: There has recently been a lot of concern about banks’ data getting to OpenAI, or to other cloud vendors, and being used for training or becoming public. There has been a lot of discussion in the industry about how to use your data securely. And in fact, this is a valid concern.

If you talk with ChatGPT, which is not the model itself but just a web interface to it, and you do not change the privacy settings, your data can be used for training. But the partnership between OpenAI and Microsoft Azure makes the model available within a bank’s own Azure cloud. And I am sure other cloud providers will follow, with either OpenAI or other language model vendors.

The way CompatibL has set up our model governance product is that it runs in a sandbox. It is subject to the same protections as any other data running in the bank’s cloud, and to the same service level agreement (SLA). It is never used for training. It is never accessed by anyone outside the bank.

The client knows that the data is secure, and they can use this natural language capability without any concern that they ask about a particular counterparty and somehow this information gets out into the open. I think a confidence-building step around security is really needed as part of AI adoption in the financial industry.

Q: There is also a machine learning innovation CompatibL has developed recently. How can you explain what Autoencoder Market Models are and how they support clients’ forward-looking risk scenarios and portfolio valuation calculations?

A: The development of market models has become extremely fast-paced in our industry. Previously, conventional models developed back in the 90s or earlier would stay in use for a decade, a couple of decades, or longer, because model development was very stagnant.

Even though CompatibL’s Autoencoder Market Models are only two years old, we already have (and have had for a while) customers with these models in production. We think that these are the right models for the current volatile market environment.

Initially, we developed these models for interest rates, but now they are also applicable to other asset classes. They are perfect in a situation where there are rapid changes in the markets. We saw that the COVID crisis was a big market shock. And now, of course, there are many other factors that are causing market volatility.

Most conventional interest rate models describe increments. In other words, let’s say an interest rate has a yield curve at a certain level today, and you model the changes in the curve over time. This approach fails when there are rapid changes or when it is used for long time horizons. Today we are seeing both. First, interest rates have been rising very rapidly after a long period of low rates in developed economies. Second, with the recent improvements in how limits, and credit risk more generally, are calculated in the industry, you need to model your portfolio over much longer horizons.

You have to model it not for a year ahead, as in some of the previous approaches, but for decades. In both cases, when there is a rapid change or there is a long period of change, the rates, or other market factors, can go very far from today’s values.

Autoencoder Market Models Paper: Now Available on SSRN
Alexander Sokol’s working paper “Autoencoder Market Models for Interest Rates” has been published on SSRN, where the full version of the paper is available, describing the new type of interest rate models based on machine learning.

The power of our autoencoder-based models, and of ML models generally, is that they learn from history to better describe where rates can be, as opposed to modeling small increments relative to today. Whereas traditional models become increasingly rigid and less capable of describing market regimes over longer time horizons, these ML models learn from history and are guided by history rather than by a formula. We believe these are truly a new generation of models that the industry will come to see as the right way to go, because they perform equally well for short and long horizons.
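As a toy illustration of the underlying idea (not the actual Autoencoder Market Model from the paper), a linear autoencoder can compress a history of yield curves into a few latent factors and reconstruct curve levels from them. The synthetic data, dimensions, and training setup below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history of yield curves (in percent): 10 tenors driven by two
# latent factors (level and slope), mimicking how historical curves occupy
# a low-dimensional manifold. All numbers here are illustrative.
tenors = np.linspace(1, 30, 10)
n_obs = 500
level = rng.normal(3.0, 1.0, size=(n_obs, 1))
slope = rng.normal(1.0, 0.5, size=(n_obs, 1))
curves = level + slope * np.log1p(tenors)        # shape (500, 10)

# Linear autoencoder: encode a 10-dim curve into a 2-dim code, then decode.
d, k = curves.shape[1], 2
W_enc = rng.normal(0, 0.1, size=(d, k))
W_dec = rng.normal(0, 0.1, size=(k, d))
X = curves - curves.mean(axis=0)                 # work with centered curves

def loss(X, W_enc, W_dec):
    """Mean squared reconstruction error of the autoencoder."""
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

initial = loss(X, W_enc, W_dec)
lr = 0.01
for _ in range(5000):                            # plain gradient descent
    code = X @ W_enc                             # (n_obs, k) latent factors
    err = code @ W_dec - X                       # reconstruction residual
    grad_dec = code.T @ err / n_obs
    grad_enc = X.T @ (err @ W_dec.T) / n_obs
    W_enc -= lr * grad_enc
    W_dec -= lr * grad_dec
final = loss(X, W_enc, W_dec)
```

After training, the two latent coordinates recover the level and slope factors that generated the curves, and the reconstruction error falls far below its starting value; this is the sense in which such models learn where curves can be from history rather than from a formula.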
