With technology now pervasive across mobile devices, wearable devices, and the Internet of Things, more and more decisions in the world of business today are event-driven and automated. And no other market is more automated than the capital markets. In fact, analysts project that by 2015, over 80% of the world’s stock trading volume will be driven by algorithms. But that automation comes at great risk: one firm, Knight Capital, famously lost over $440 million in 40 minutes in 2012. So, with so much automation, how do we keep the markets safe for investors?
Ellis: Welcome to the TIBCO podcast. I’m your host, Ellis Booker. Today’s topic is continuous surveillance and compliance. Let’s start with the fact that the world’s now racing toward mobile devices, wearable devices, the Internet of Things, and real-time systems. So more and more decisions in the world of business are event-driven and automated. And no other market’s more automated than the capital market. In fact, analysts project that by 2015 over 80% of the world’s stock trading volume will be driven by algorithms.
But that automation comes at great risk. Let’s not forget that one firm, Knight Capital, famously lost over $440 million in 40 minutes back in 2012. So with so much automation, how do we keep the market safe for investors? To discuss this and more, we have with us today Joe Weisbord, Managing Director, Execution Trading Systems, at ConvergEx, one of the largest electronic stock agency dealers in the United States. Welcome, Joe.
Joe: Hello, Ellis.
Ellis: Also with us is Mark Palmer, Senior Vice President of Engineering at TIBCO. Hello, Mark.
Mark: Hi, Ellis. Hi, Joe. How you doing?
Ellis: Joe, tell us first about ConvergEx.
Joe: ConvergEx, which was founded in 2006, is an agency-focused, global brokerage and trading-related services provider. ConvergEx delivers comprehensive solutions that span global high-touch and electronic trading, options technologies, prime brokerage, clearing, and commission management. We have nearly 3,000 clients accessing over 100 global markets, averaging more than 100 million shares traded daily. I am the Head of Development for the U.S. and international equity trading systems.
Ellis: Joe, I understand your system processes, at its peak rate, 600,000 messages a second, which is, I got my calculator out, billions of events a day. Can you explain how that works?
Joe: Sure. Clients from around the world send orders 24 hours a day, 6 days a week, through our client gateways. The orders then go to our order management systems and are routed either directly to markets or to our smart and algorithmic engines. These engines create multiple orders by using quotes and mathematics to generate what we call child orders that get sent to the markets. We receive over 100 million orders per day. When you add in cancellations and executions, the number of messages total over 500 million a day. We see peaks of 100,000 messages per second in our trading system, and just to add to the mix we also need to process market data at rates of about 500,000 messages per second.
We then took all this data and put it into TIBCO StreamBase’s complex event processing system. This product was able to take almost anything we threw at it and seek out the relevant pieces, allowing us to intelligently filter, correlate, and aggregate the data. The ability to detect patterns is really the key to all our monitoring systems.
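The filter, correlate, and aggregate step Joe describes can be sketched in plain Python. This is only an illustration of the concept: StreamBase expresses such logic in its own EventFlow language, and the event shape, window size, and burst threshold below are invented for the example, not ConvergEx's actual rules.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60  # assumed sliding-window length for the example

class SlidingWindowMonitor:
    """Toy CEP-style monitor: keeps a per-client window of recent order
    events and flags clients whose recent volume exceeds a threshold."""

    def __init__(self, burst_threshold=1000):
        self.windows = defaultdict(deque)   # client_id -> deque of (ts, qty)
        self.burst_threshold = burst_threshold

    def process(self, ts, client_id, event_type, symbol, qty):
        # Filter: only order events are relevant to this particular monitor.
        if event_type != "ORDER":
            return None
        win = self.windows[client_id]
        win.append((ts, qty))
        # Evict events that have aged out of the sliding window.
        while win and win[0][0] < ts - WINDOW_SECONDS:
            win.popleft()
        # Aggregate: total shares this client sent within the window.
        total = sum(q for _, q in win)
        if total > self.burst_threshold:
            return {"client": client_id, "window_qty": total, "at": ts}
        return None
```

The point of the pattern is that each incoming event updates a small piece of running state instead of triggering a scan over stored history, which is what makes rates of hundreds of thousands of messages per second tractable.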
Ellis: Let’s bring Mark Palmer of TIBCO into the conversation. Mark, Joe’s firm is doing something you’ve called continuous compliance. Can you explain what that means?
Mark: Sure, Ellis. So, you know, I think the popular view of automated trading on Wall Street is that it’s all about automation of trades, right? Taking the action, taking the trade. What I think is interesting about what Joe’s team has done is that it’s using the same technology, real-time processing and algorithms, but in a surveillance, compliance, and risk management way. So it’s a way of computing where, when somebody says it’s real-time, it’s really about real-time monitoring of what’s going on, and then taking billions of events a day, like Joe’s system is processing, and whittling them down to the few hundred that matter. That’s actually almost a quote from you, Joe, from a panel we were on once.
And so I think it’s interesting because it’s applying the same algorithmic computing model continuously to the trade flow and then using it to keep the markets, in this case, safer and to make sure the trade flow is all surveilled, proper, and accurate. So I think it’s part of an overall trend of continuous computing, and Joe’s application and infrastructure is a great example of it in the compliance, risk management, and surveillance space.
Ellis: Joe, would you agree with Mark’s definition there of continuous compliance and surveillance? How does ConvergEx view the world?
Joe: Well, when it comes to compliance, the old methodology of end-of-day processing with next-day investigation just is not going to make it in our current environment.
We really needed a very dynamic approach to keep up with the detection of manipulative trading patterns and with the regulations. An interesting byproduct of real-time compliance is that we can now investigate while our customers are still at work. Investigation is much easier when it just happened, as opposed to trying to recreate what went on days later.
Ellis: Very interesting. So, Joe, let’s get down to the tactics of this. How do you whittle down these billions of events into the few that matter?
Joe: Well, we spent a lot of time devising algorithms and methodologies to reduce the false positives and assess the validity of detected events. One thing that really helped us was the development studio in the StreamBase product, because it gave us the ability to quickly build and refine monitoring strategies. Coupled with the sophisticated debugging and back-testing tools, it allowed us to try out new things very quickly. So we were able to take those 500 million trade messages and knock them down to 1,000 alerts a day.
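The kind of whittling Joe describes, hundreds of millions of messages down to a thousand alerts, typically combines corroboration (require several raw detections before raising anything) with suppression of repeat alerts for the same issue. A minimal sketch of those two ideas, with invented keys, windows, and thresholds rather than ConvergEx's actual methodology:

```python
class AlertThrottler:
    """Raises an alert only after min_hits raw detections for the same key
    arrive within a correlation window, then suppresses repeat alerts for
    that key during a cooldown period."""

    def __init__(self, min_hits=3, window=300, cooldown=3600):
        self.hits = {}        # key -> recent detection timestamps
        self.last_alert = {}  # key -> timestamp of last raised alert
        self.min_hits = min_hits
        self.window = window
        self.cooldown = cooldown

    def detect(self, key, ts):
        # Drop stale detections that fall outside the correlation window.
        hits = [t for t in self.hits.get(key, []) if t > ts - self.window]
        hits.append(ts)
        self.hits[key] = hits
        if len(hits) < self.min_hits:
            return False  # not yet corroborated: likely a false positive
        last = self.last_alert.get(key)
        if last is not None and ts - last < self.cooldown:
            return False  # already alerted recently: suppress the duplicate
        self.last_alert[key] = ts
        return True       # escalate to the compliance workflow
```

Here `key` might identify a client plus a suspected pattern; only `True` results would flow into the downstream investigation workflow Joe mentions next.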
The next thing we did, working with the real-life data and with the direction of our compliance department, was to design and implement a collaboration and research workflow that allowed us to automate the whole process from detection to resolution. So we used the data, the product, and then we added a workflow onto the back of it to have our compliance system run in a very efficient manner.
Mark: I was just going to emphasize that that is a good description of what I think is a common model across automation today: you’ve got this real-time processing, and then you balance it properly with human action as a result of the algorithmic detection you’re making in the core infrastructure. I think what Joe’s done represents the modern way of building automation systems, where they balance human action with machine-informed detection and alerting.
Ellis: The question that occurred to me was, you built this workflow with the compliance department, but did it change over time based on the data types and the volumes you were getting? I mean, did the data somehow influence the creation of that workflow, or were you pretty much good to go once you conceptualized it?
Joe: Well, actually it took us quite a while to get the false positives down. So the workflow, we kind of defined it, we tried it out, and then we changed it as we went. The thing that took the most time was really going through this data and coming up with new strategies to decide what to attack. And, you know, you mentioned do things change? Things change all the time. We really need to stay on top of this, because remember, the real goal here is to make sure we protect the markets.
And it’s not that your customers are doing something wrong, but you do need to keep watching the data, because things do happen. This is a very fluid market. There are a lot of trades here, and as you mentioned at the beginning with what happened at Knight, it didn’t take long for that to happen. Things happen really quickly, and you really need to stay on top of it.
Mark: Yeah, one thing I would add to that is that point Joe made about rapidly identifying new algorithms. It probably applies as much to the ongoing evolution of the intelligence of the algorithms, Joe, and I wonder if you agree with this, as it does to initially developing the system. In other words, there was an analyst who did a study saying that the shelf life of some algorithms on Wall Street is just six weeks.
In other words, if they’re effective when you release them, then other players in the market will figure them out and counteract them within six weeks. So you constantly have to be evolving, and especially for risk, compliance, and surveillance systems, right? You have to evolve just as quickly. So it’s not so much about getting to market quickly; it’s about continuing to back-test, evolve, and come up with new algorithms, which I think is the core of what you’ve been doing for a while now, Joe.
Joe: No, right, absolutely. So you start with an idea, you build it, but it really needs to be watched and continually maintained. It’s actually a lot of fun, to be honest with you.
Mark: Yeah, well, it’s almost like designing a game, and it strikes me that it’s very different from a traditional IT system, because traditionally you would build a system and then just sort of leave it there. It might run for 20 years before somebody significantly changes its function. But here you’re talking about changing it constantly as the markets evolve and people come up with other ways to game the system, or just make mistakes, or market structure changes, or whatever. So I agree with you, Joe. It’s really a fun sort of mathematical problem that never stops being a challenge.
Ellis: Right. So, Mark, broadening this past the capital markets, I’m curious what you make of the sort of technology challenges that firms like ConvergEx are facing to make this kind of automation and event-driven decision-making happen.
Mark: Well, I think, you know, to me it fundamentally boils down to dealing with data in motion versus data at rest. The things Joe talked about, back-testing and simulating the detection algorithms they’ve designed, it’s really about temporal programming, temporal decision-making, and dealing with peak rates of 600,000 messages a second. You really have to simulate the way your system behaves in an accurate way to learn the right way to reduce false positives. Because if you’ve got a compliance officer getting a million alerts a day, well, that’s not very helpful. You have to come up with the patterns that are effective.
So I think the seminal challenge in any such system is just dealing with data that moves that fast, and then optimizing the algorithms through analytical techniques to figure out the right ones. That’s a data science challenge, and how you design these algorithms is, I think, the big modern challenge in IT.
Ellis: Exactly. And Joe, you said earlier that this took quite some time to build and that you’re still working on it; it’s iterative. So my question is, based on that experience, is there any advice you could give to other CIOs who are starting down this road of automating really massive amounts of data very quickly?
Joe: Well, I don’t usually give advice, but I’ll give it a shot. So we had a concept and decided not to let the usual roadblocks of how it was done in the past stand in our way. Our first principle was to keep it simple: capture all the data, okay? Sounds like an easy concept, but when you deal with a lot of systems, going back and changing the core systems is an almost impossible task.
So we took a little different approach, and we basically said, “You know what? We’re going to take this data from everywhere, however we can get it.” We took data off of message buses. We took data off log files. We turned on system parameters to generate duplicate messages. We even arranged for duplicate data to come in from external sources. And our philosophy was that we didn’t care if the data was duplicated or what format it was in. It was a lot easier for us to read a different format than to change the core system. We could always enhance the transport later, which is what we’ve done over time.
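The “take the data however you can get it” approach boils down to writing one small parser per source, normalizing everything into a common record, and deduplicating, since the same event may arrive from two feeds. A sketch of that idea in Python, where the field names, formats, and `id`-based dedup rule are all hypothetical illustrations rather than ConvergEx's actual schemas:

```python
import json

def parse_bus(msg):
    # Hypothetical message-bus payload: already a dict of key/value pairs.
    return {"id": msg["msg_id"], "symbol": msg["sym"], "qty": int(msg["qty"])}

def parse_log_line(line):
    # Hypothetical pipe-delimited log line: "id|symbol|qty".
    msg_id, symbol, qty = line.strip().split("|")
    return {"id": msg_id, "symbol": symbol, "qty": int(qty)}

def parse_external(blob):
    # Hypothetical external feed: JSON text with its own field names.
    d = json.loads(blob)
    return {"id": d["messageId"], "symbol": d["ticker"], "qty": int(d["shares"])}

class Ingestor:
    """Accepts normalized records from any source and drops duplicates,
    so downstream monitors never see the same event twice."""

    def __init__(self):
        self.seen = set()

    def ingest(self, record):
        if record["id"] in self.seen:
            return None   # duplicate copy from a second feed: safe to discard
        self.seen.add(record["id"])
        return record
```

The design choice mirrors Joe's point: adding a new `parse_*` adapter is cheap, while changing a core trading system to emit a new format is not.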
And so the next piece of advice would be: you’ve got to get yourself a tool that can process and analyze the data. Now, you don’t want a simple BI tool, okay? What you really want is a product that gives you the same power and flexibility as building a system. I sometimes get asked what my developers’ initial reactions were when we said, “Let’s go get a product like StreamBase as opposed to doing it in a regular language.”
I started my career in the age of 4GLs, and I love higher-level languages. So when I first brought up the topic, I would hear statements like, “This is too easy to be useful. It cannot possibly do everything that I can already do in Java or C#. Do I really need another language?” I wasn’t able to convince my hard-core C++ guys, but the Java and C# developers, especially the ones who wanted to concentrate more on business logic, they got it. They’re now converts. They saw the power that could be derived from a simple diagram and how that translated to a much quicker time to market.
I guess my last piece of advice is to keep coming up with new ideas. Our compliance system actually started as an operational monitoring system. We got all the data and monitored the actual running of our system, and when we were done we kept thinking of other things to do with the data, and we came up with compliance. In fact, we’ve just started working on our next monitoring and compliance effort, which combines algorithms, monitoring concepts, and lessons we’ve learned from the previous projects. We’re calling it Money Velocity. And while the previous focus had been client-centric, we’re now moving towards a ConvergEx, or firm-centric, monitoring system.
So we’ve enlisted our quant team to help us create sophisticated mathematical algorithms that will help us monitor both the inbound client trading activity and the outbound trading activity of the firm. We’re also going to create the ability to dynamically tune the algorithms based on market conditions and historical trends. So we’ve been able to create an environment where we not only produce successful products, but the developers really have a good time building them.
Ellis: Great answer, Joe. As a final question, Mark, the capital markets are not the only place where you’re seeing these sorts of installations. Can you talk about some other places that are deploying event-driven systems that work on massive amounts of streaming data?
Mark: Yeah, well, you know, I think everybody knows about the capital markets and how much data there is and how fast it moves, but we’re really seeing an uptake all over the place. It’s especially driven by mobile connected devices and Internet of Things applications, where real-time streams are being generated at rates that rival what you see in capital markets. So, for example, in public transportation, a lot of areas around the world are building sensors and GPS into planes, trains, and automobiles to monitor a complex network of public transportation in real time and do things like avoid congestion, save fuel, and schedule more efficiently.
We’re seeing the same kind of continuous computing model in monitoring really expensive industrial equipment. In the oil and gas market, for example, one of our other podcasts talks about monitoring sensor readings coming from oil wells, and some firms are saving a hundred million dollars a year just by making sure the oil wells are not having problems and doing predictive maintenance on them.
You know, we’re seeing it in the whole logistics industry, where the norm now is that customers want alerts, just like Joe’s compliance people get alerts for different things. Customers want alerts when their package is being delivered, or when it’s been delayed, or about where it is. Those are all continuous computing, real-time examples. And you could go on and on: in the telco field, even in healthcare, there are lots of different applications that I view as exhibiting the same kind of architecture and challenge that I think Joe’s system really nicely represents.
Ellis: And that has to be the last answer. We’re out of time. I’d like to thank Joe Weisbord of ConvergEx and Mark Palmer of TIBCO for being on this episode of the TIBCO Podcast. Thanks for listening.