Kafka and JMS Mocking with AsyncAPI using Specmatic

Presenter: Hari Krishnan
Event: AsyncAPI Conference on Tour
Location: Bangalore, India

Presentation summary

AsyncAPI allows us to articulate communication channels between services clearly. What if I told you that we can leverage the same AsyncAPI specification to quickly spin up mock topics and queues to test our services in isolation?

That is exactly what Specmatic is able to achieve by leveraging AsyncAPI specifications to give us early feedback on our local machines and in our CI pipelines when our service implementations deviate from the API specifications.

In this talk I will go over the points below, with live demos.
1. The ability to detect deviations in the implementation at the protocol level, the schema level and more during the early stages of development, shifting left the identification of potential integration issues.
2. Leveraging AsyncAPI specifications to collaborate across teams so that they can develop and deploy microservices in parallel with confidence.
3. How this ability fits into the overall concept of Contract Driven Development.


Transcript

Let’s jump right into the topic today. I’ll start with a small reference architecture and then we’ll go into a live demo of how we could leverage API specifications, AsyncAPI especially, for mocking purposes. All right, so I have this mobile application which talks to a backend, a backend for frontend (BFF) rather, and that in turn talks to a domain service and also to an analytics service, just generally three or four pieces there. Now there’s a request going here and a response coming back, and there’s a message going from the BFF to the analytics service saying there was an interaction, and that’s going over Kafka.

 

And once that’s done, in parallel you’re also responding to the application. So that’s the rough interaction. I’m also being a good citizen, so I have all of my interactions documented in API specifications: all the HTTP goes over OpenAPI and Kafka on our dear AsyncAPI specification. So this is my rough setup. Now with this, let’s say I want to test this application. It wouldn’t be practical for me to take all these pieces, put them in an environment and test them all the time. I want to test small parts of it, bits of it. So let’s say I want to test this particular piece, the backend for frontend, and that’s the system under test. To test that, I pretty much need to tackle all of these dependencies: I have Kafka, I have another domain service. And whose point of view is the test from? That would be the point of view of the application, whoever is making the calls. Now, how do I test this? It looks a little imposing to say I have to get all of this isolated and tested.

 

But we have a trick up our sleeve, right? We have API specifications; can we put those to good use here? So here’s what I’m going to do. For the test I’m going to use this open source tool that my team built, called Specmatic. I’m going to take the API specification which governs the interaction between the mobile app and the system under test and run that as a test on this side. On the dependency side, for the HTTP, I again take the API specification for the domain service and stub it out. And more interestingly, what about the Kafka? I can take the AsyncAPI specification and, based on it, spin up an in-memory broker with the appropriate topics, according to what’s in the specification itself, and I can also have a schema validation engine booted up. So this is my setup. If you think about it, in this entire setup I am not writing any tests. This is all just taking the specifications and leveraging them. Now, the first step of the process, obviously, when you have stubs and mocks, is to set expectations. I need to let the HTTP stub know that these are the requests you’re going to receive and this is what you’re supposed to respond with.

 

And likewise, even for Kafka, I need to let it know what messages to expect and how many to expect. So with that expectation setting done, that’s my setup. Now to the test. Specmatic is able to read the OpenAPI specification and, based on it, generate tests on the fly and make requests to the BFF, which in turn makes requests to the stub. The stub already knows what to respond with, right? We’ve set expectations, so it’s going to give back the canned response. And now the BFF dutifully pushes the message into the Kafka broker, which is the mock, and Specmatic’s Kafka mock will keep a record of it; I’ll tell you why. And then we respond. Once the response comes back to the contract test, obviously we verify the HTTP response against the schema and the OpenAPI spec. Now comes the important part. All of this is done, but how do we know that this guy sent the right message to the broker? That’s where the verification step comes in: now that the test is done, Specmatic verifies. Did you receive the messages that you were supposed to receive, and were they according to the schema and other details in the AsyncAPI specification?
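To give a feel for what such an expectation looks like, here is a minimal canned request and response pair along the lines the HTTP stub accepts. This is a sketch: the endpoint, fields and values are placeholders I have made up for illustration, not the demo’s actual data, and they would have to conform to the domain service’s OpenAPI specification.

```json
{
  "http-request": {
    "method": "GET",
    "path": "/products"
  },
  "http-response": {
    "status": 200,
    "body": [
      { "id": 1, "name": "Gadget", "inventory": 10, "categories": ["electronics"] }
    ]
  }
}
```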

 

Now that’s our test setup. Okay, so now that I’ve given you the lay of the land, sort of a visual reference of where this is going, let’s jump right into a demo. Let me quickly switch over to my IDE. I have a Spring Boot application here, very straightforward, something that I just built from start.spring.io. And I have the BFF app here, the controllers and everything. And then I have the contract test, which is here. As mentioned, I only have setup and tear down. As you may see, there are no tests. Interesting, right? It’s just setup and tear down. Where are the tests going to come from? Specmatic is going to figure that out. But before I get into it, let me kick off the test and then explain the process as it goes. So I’m starting the test now. While that’s running, how does this contract test know where to pick the specifications up from? How is it getting that lay of the land? That’s coming from this configuration file called specmatic.json. Now, this is an exact reflection of that setup. I’m not sure if this is big enough. Is this readable? All good.
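For reference, a specmatic.json along these lines captures that setup. Treat this as a sketch, not the canonical format: the repository URL and file names are placeholders rather than the demo’s actual ones, and the exact key names can vary across Specmatic versions.

```json
{
  "sources": [
    {
      "provider": "git",
      "repository": "https://github.com/your-org/central-contract-repo.git",
      "test": ["product_search_bff.yaml"],
      "stub": ["order_service.yaml", "kafka.yaml"]
    }
  ]
}
```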

 

Okay, so you remember, right, I was using the specification between the mobile app and the system under test as a test, and the remaining two pieces, the HTTP OpenAPI specification and the AsyncAPI specification covering the Kafka, those are my stubs and mocks. So I’ve listed them like that. That’s pretty much it, very declarative: I’ve said these are my specifications, use this one as a test and use these as my dependencies. Now, you’ll notice that meanwhile the test also passed. And if I go to the very end of it, it’s also given me a report, an API coverage report of the API specification. Of course, this pertains to the OpenAPI. It said it found the find available products path, and that was covered; it was executed as part of the test. It also found one more path in the app which was not in the specification, so it’s highlighting that too, which is why the coverage is at 50%. Just calling that out. But that’s on the OpenAPI side. Now let’s do something interesting with AsyncAPI. Tests passing is never fun; a test has got to fail. So let’s look at the code a little bit.

 

So this is the products controller which is receiving that request. And if I slowly go into what’s happening here, it in turn invokes the order service, the domain service, to find the products, and that in turn has this code to send the message to the Kafka topic. Now, I was just playing around with the code and unwittingly commented out that particular line. I just made a mistake; it’s human to make mistakes. So I’m going to run the test now. What do you think is going to happen? What should the hypothesis be? Sure, let’s see if that happens. So obviously what’s passing there is the OpenAPI test. Correct, it got the response back, because the Kafka send is asynchronous, so the HTTP side is passing. But right there, like the gentleman just pointed out, I expected three messages as part of my test, because I was expecting them on this particular topic called product queries, but did not receive any message. Now where is all this coming from? Let me quickly open the API specification itself, the kafka.yaml which I was showing you. You see that there is a channel called product queries.
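As a rough sketch, assuming an AsyncAPI 2.x document (the demo’s actual file may differ in operation direction and other details), that channel and its payload schema could look something like this:

```yaml
asyncapi: 2.6.0
info:
  title: Product analytics
  version: 1.0.0
channels:
  product-queries:
    publish:
      message:
        payload:
          type: object
          properties:
            id:
              type: integer
            name:
              type: string
            inventory:
              type: integer
            categories:
              type: array
              items:
                type: string
```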

 

That’s how it figured out that this is the topic it needs to be listening on. And it also gives the schema and details. But how did it know to expect three messages? We need to look at the code for that. So let’s look at the contract test. Now, you saw these two collapsed sections, right? They are basically the setup and the tear down. So let me quickly go through the setup. The first order of business: I start the HTTP stub server and set the expectations. Expectations are nothing but canned responses, right? Given this request, give me this response. As long as that adheres to the API specification and the schema, we’re good. So I’m setting up one file for that. Then I start my Kafka mock, and I say, on this particular topic, product queries, expect this many messages. I can only expect messages on topics that are already part of the AsyncAPI specification; I cannot veer away from it. That’s the important part about mocking. The next thing is I start the Spring Boot application itself, which is our system under test. And that’s it. Now this whole class, if you look at it, the contract test class, extends Specmatic’s JUnit support.

 

And that’s where the magic lies. Because once you set it up, it figures out the remaining pieces: it pulls the specification, figures out what tests to create, and then just starts firing requests. So again, going back to my whole point: I just have specifications and I didn’t have to write any tests myself. And who doesn’t like free tests, right? That’s always been my go-to thing. So now that you’ve seen that it’s fairly trivial, can we do something more interesting? I know that if I don’t send a message, sure enough it catches that issue. Can I do something interesting, like, say, what if I send it to an incorrect topic? That can happen, right? I misspell the topic and something happens. So I’ll kick off the test again. What’s the hypothesis now? Obviously it’s got to say that, I mean, I didn’t receive a message on the expected topic. Anything else that you expected? You expected three on product queries but did not receive any. However, this is a mock, right? So what’s the difference between mocks, stubs, fakes, proxies?

 

If you go into that purity, the beauty of a mock is that anything other than what you expect it to do, anything else it receives, it starts complaining about. That’s exactly what it’s complaining about here: I did not receive the message on this particular topic, which you expected, but I did receive it on some other topic, so I’m going to point that out. This is great feedback for me as a developer; I can immediately go fix it. It’s usable feedback for me. So that’s one. Okay, what else can we do now? We looked at not sending the message and at sending it to an incorrect topic. How about the message itself? What if I modify it, can I change the schema? So here’s the product message, and let me go add something to it. Maybe I’ll say price: it’s a product message, it’s got to have a price. I’ll set it to zero. Now, if you look at my kafka.yaml again, my payload only defines name, inventory, id, and categories. It does not include price, right? So let me kick the test off again. What is the hypothesis now?
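So the message the BFF now publishes carries an extra key that the specification does not define; roughly something like this (the values for the other fields are made up for illustration):

```json
{
  "id": 10,
  "name": "XYZ Phone",
  "inventory": 5,
  "categories": ["gadget"],
  "price": 0
}
```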

 

And there you go, you have very helpful feedback here which says the payload key price was unexpected, not part of the specification. So none of this am I actually having to keep tabs on; the specification is there and it’s keeping me in check. As a developer, I need guardrails. I especially make stupid mistakes when I write code, and I want guardrails to constantly tell me where I’m going wrong. And that feedback is great. So that’s a quick overview of what’s happening here. Now, how is this all verified at the end? I showed you the setup of the contract test, so now let me show you the tear down, which is the after-all method. What’s happening here is: after I close the context, I shut down the Spring Boot application, I shut down my HTTP stub, and finally I come to Kafka. I tell it to wait for the messages that I’m anticipating, and once that is done, I verify expectations. And that’s how I catch these errors. Now, this can be done even with a command line approach, but this is how I like it, very much integrated into my component test setup, because this can run in my CI.
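To make the shape of that setup and tear down concrete, here is a rough skeleton of such a contract test. The flow mirrors what is described above, but it is only a sketch: the commented-out stub and Kafka mock calls are hypothetical placeholder names rather than Specmatic’s actual API, BffApplication stands in for the demo’s real Spring Boot application class, and the Specmatic package name varies across versions, so take the real class and method names from Specmatic’s documentation.

```java
// Package names vary across Specmatic versions (in.specmatic.* vs io.specmatic.*).
import io.specmatic.test.SpecmaticJUnitSupport;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.springframework.boot.SpringApplication;
import org.springframework.context.ConfigurableApplicationContext;

// Specmatic generates the actual @Test methods from the specification listed
// under "test" in specmatic.json; this class only provides setup and tear down.
class ContractTest extends SpecmaticJUnitSupport {

    private static ConfigurableApplicationContext context;

    @BeforeAll
    static void setUp() {
        // 1. Start the HTTP stub for the domain service and load canned
        //    expectations (request/response pairs conforming to its OpenAPI spec).
        // httpStub = startHttpStub("expectations/find_available_products.json"); // hypothetical

        // 2. Start the Kafka mock from the AsyncAPI spec and declare how many
        //    messages are expected on which topic.
        // kafkaMock = startKafkaMock();                                          // hypothetical
        // kafkaMock.setExpectations("product-queries", 3);                       // hypothetical

        // 3. Finally, boot the system under test so Specmatic can fire requests at it.
        context = SpringApplication.run(BffApplication.class);
    }

    @AfterAll
    static void tearDown() {
        // Stop the Spring Boot application, then the stub, then verify the mock.
        context.close();
        // httpStub.close();                  // hypothetical
        // kafkaMock.awaitMessages();         // wait for the anticipated messages (hypothetical)
        // kafkaMock.verifyExpectations();    // fail if topic, count or schema deviate (hypothetical)
    }
}
```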

 

And it’s super fast. I was running it so many times, messing around with it, and it just works. That’s the beauty of having that feedback going. All right, so let’s quickly get back and recap what we just saw: Kafka mock validations. Why is this so important? Many things can go wrong, right? There are just two systems here, and you invoke one and it has to drop a message onto the other. What could possibly go wrong? A lot of things, actually. You could send the right message on the wrong topic. You could send the wrong message on the right topic. You could send an incorrect number of messages, or you could send them out of sequence. A tonne of these issues can happen, and this is just two systems I’m talking about. Companies I work with have hundreds of microservices, several hundred, of that order. It’s a distributed systems problem, and it’s definitely hard to say I’m going to depend on integration testing to get this out of the way. While I cannot rule integration testing out, it’s not the way to find these kinds of issues. Compatibility issues I should be able to find on my local environment, right?

 

Let’s say I have a consumer application and they are doing their CI. These issues which I pointed out are not likely to be found in unit testing. And even if you’re doing component testing without mocking, or you’re mocking without using a specification, you’re not likely to find the issue. Likewise, the providers live in their own universe. The only time you find an issue is when you actually start deploying to an integration environment, and then boom, you have a bug, and that breaks the entire integration environment. These are just two services which are broken, but usually that has a cascading effect on the integration environment: everything goes down. How many SITs can you spin up? Even with ephemeral environments, it’s not viable, and that blocks your path to production, and obviously it leads to disgruntled users. And the cost of finding issues at this stage is also very high; if you find your issues here, you’re already quite deep in the red. Not viable. You want to find your issues early, but how do you do that? That’s where you start leveraging your OpenAPI and AsyncAPI specifications for service virtualization, so that you can independently test your systems. But is this enough?

 

Do you think this will work? Why not? Any guesses? Okay, the reason is: for all I want, I can stub, right? But I’m still living in my own dreamland. I believe this is working in a certain way, I believe in that AsyncAPI specification, I can be a good citizen and adhere to it, I’ll use Specmatic mocking to work with Kafka. But if that same specification is not being verified against the provider on the other side of the equation, then we are still not on the same terms. Both parties have to agree on that contract, otherwise it’s not a proper equation. Which is why testing the provider is also important. Now, I know the topic I came to you with today is more about mocking and stubbing, but like I said, mocking cannot be divorced from the fact that the other side has to verify against the same contract. So I’ll give you a very quick example of what is possible. The request-reply pattern is very popular with people who are slowly moving from synchronous to asynchronous. So again, if you have an AsyncAPI specification, Specmatic can test the system purely based on that.

 

So let’s say we have a Kafka broker, and Specmatic is configured to pull the AsyncAPI specification and some test data. Obviously I can’t randomly generate everything, right? In my previous case, too, I had examples in the OpenAPI specification which were being leveraged to generate the tests. Otherwise the data would be junk; I would just be making it up based on data types, and that’s not always good enough. So now I have the test data, and based on the specification I also validate the payloads of that test data; it has to be in sync with the specification. Then I can run a contract test. What do I mean by that? I know the specification, and in the AsyncAPI specification, for the request-reply pattern at least, the producer pushes a message back on the reply topic. Specmatic reads that and verifies it against the schema to make sure it adheres to it. And these two topics are paired and considered one unit, because it’s request-reply. There are several patterns in event-driven architecture: you could have fire and forget, you could have request-reply, or you could have a fan-out situation.
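As a rough sketch of how a request-reply pair can be expressed, here is a minimal AsyncAPI 3.0 document. This is an assumption for illustration (the demo may pair topics in a different way), and the channel names, fields and correlation id location are placeholders.

```yaml
asyncapi: 3.0.0
info:
  title: Product queries
  version: 1.0.0
channels:
  productQueries:
    address: product-queries
    messages:
      productQuery:
        payload:
          type: object
          properties:
            id:
              type: integer
  productQueryReplies:
    address: product-query-replies
    messages:
      productQueryReply:
        correlationId:
          location: '$message.header#/correlationId'
        payload:
          type: object
          properties:
            name:
              type: string
operations:
  requestProducts:
    action: send
    channel:
      $ref: '#/channels/productQueries'
    reply:
      channel:
        $ref: '#/channels/productQueryReplies'
```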

 

But request-reply is just one example that I’m showing. Once you get back that response, obviously that is one loop, and I keep doing that for all the topic pairs. Once that is done, I should be able to generate a report: I can validate the topics and the offsets, whether the offsets were read properly; I can validate the counts of messages, which you already saw; and I can also verify the payload schema, which gives me very comprehensive coverage of the API specification. And in the process, I’m also getting free tests for my system. Okay, so while, in the interest of time, I cannot go into the demo of a test, what I will do is show you the report of a test for a request-reply pattern. So I have a product.yaml, which is more like a request-reply; we have paired up the topics here. Both of those steps have to pass in order for this fellow to be considered successful. And Specmatic allows me to dig down, go into the payload itself, look at what’s happening there, and as you can see, there is this correlation id.

 

There is a reply channel, the reply comes back on the reply channel, and the correlation id is the same; all of this is also accounted for. So that’s the testing side. Again, to reiterate, when I mock on my side with the API specification, that’s only half of the equation; I have to make sure the other party lives up to that contract by running it as a test. Only when the LHS and RHS are equal are we balanced, right? Cool. Coming back to the slide now: I asked you the question, is mocking sufficient? Then I said no, you will also need the testing on the other side. Now I am assuming we all agree testing is valuable. Now testing is done, but is that still enough? We have service A and service B, this is the provider, this is the consumer, and we have an AsyncAPI specification in between. Both teams are agreeing that we’ll do both these things. Is this still enough? Is there something else missing? One critical piece which I have found in my experience is that I can still miss updating the specification. I could be sharing my AsyncAPI specification over a portal.

 

I’ve seen teams putting it on email or in a shared folder: pick it up from here. But I forgot to update it. Or maybe I am the consumer and I forgot to take the latest version. Now what happens? Even though we have all this sophistication of saying I’m going to run mocking, I’m going to do testing, all of that, we still live in our own dreamlands and build stuff according to our own whims and fancies. I have an SKU here and this person’s expecting a price; the payload is completely off. I’ve seen even sillier issues, like single camel case versus full capital case. That kind of silly slip leads to complete systems coming down, and that leads to a broken setup. All the mocking and testing is then to no avail. How do you solve this? That’s where we believe you need a single source of truth, and you need to start treating your contracts as code. Your AsyncAPI specification, OpenAPI specification, your WSDL, whatever it is, needs to be treated like code. And where does code live? Git. The best place, in our opinion, is git. So you put your specifications into git.

 

Which means everyone has a common ground to come to, and you’re immediately made aware of updates. And that also brings an interesting perspective, because you now have a pull request process. In all of this, I believe API specifications are about more than just documentation; they’re a mechanism for me to collaborate across teams, and that collaboration aspect and that design aspect are the more important pieces. That practice can be fostered with a pull request mechanism. There’s a committee of people who are all interested in that particular service and who can review the PR, so someone can propose a change. You can push standards onto it: run linters, Spectral or Vacuum for that matter, and get your specifications in order according to your organisation’s standards. Then you can run backward compatibility checks. This is something Specmatic can definitely help with: it can do codeless backward compatibility testing. What I mean by that is it can take the version from the branch or the PR and the version from HEAD, take the two versions, and do a codeless comparison. Which means that even before I go and implement something, I first design the change, or propose the change, on the specification.
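A sketch of the kind of checks such a PR pipeline on the contract repo might run is below. The file paths are placeholders, the Spectral invocation is the standard one, and the Specmatic backward compatibility command is shown only as an assumption, so check the Specmatic documentation for the exact invocation in your version.

```bash
# Lint the proposed specification against organisation standards
# (Spectral shown; Vacuum would be an alternative).
npx @stoplight/spectral-cli lint contracts/kafka.yaml

# Codeless backward compatibility check: compare the spec on the main branch
# with the spec proposed in the PR. The Specmatic command below is hypothetical;
# consult the Specmatic docs for the actual subcommand and arguments.
git show origin/main:contracts/kafka.yaml > /tmp/kafka_main.yaml
# java -jar specmatic.jar compare /tmp/kafka_main.yaml contracts/kafka.yaml
```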

 

And between the two versions, am I breaking something that exists, or how is this going to pan out? I can get a sense of it just by running a no-code comparison. That is great feedback, because I don’t have to invest the effort of writing the implementation code first, and then I review and merge. So that solves the problem of being on the same page, the single source of truth. Now that I have a single source of truth, Specmatic can pull from the central contract repository. If you recollect, in all my examples earlier, when I showed you the specmatic.json, there was this line I had skipped explaining at the time: it says, pick it up from this git repository. So whenever I’m running the test, it’s always checking whether I’m on the latest or not. That’s where it’s getting all the values from. I just had a time check, thanks for that, Tali, so let me move quickly. Now that I know I can pull from the central repo, I can run stubbing locally for the consumer and I can do contract testing on the provider’s pipeline.

 

And if you have done this all along, locally and in your CI, you can deploy with confidence to your integration environment, and you know it’s going to work, because at least you are adhering to the specification; 99% of the time it’s definitely going to work. Then you have a clear path to production. And this is a better place to be, because any compatibility issues you’re identifying are over here in the green, not later in the cycle. That’s the shift-left aspect of it. So, moving quickly, what more can Specmatic do? I showed you the open source part of it, the part that you can use on a day-to-day basis to get developer feedback. But there’s also the part about gathering the data and using that information; that’s where Insights comes in. I’m able to pull architecture diagrams out of it. I can figure out your dependencies, which HTTP and Kafka dependencies each service has, and then I can figure out whether contract-driven development itself, which is the bigger overarching concept, is being adhered to or not. To quickly give you a live overview of it, I have this force diagram here, which represents a microservice system that has published data from various environments.

 

CI, SIT, wherever the stubbing is running. And when you click on each of them, you can see the path: who’s the consumer, who’s the provider. There’s also a visual cue: there seems to be some sort of bottleneck going on here; one service seems to be central to all of them. And there is this UI which talks to this BFF, and the BFF services are all congregating around that central service, which looks like a failure point or a bottleneck. You can drill down further, look at what’s happening here, how many of these services are going into that, and also get a lowdown on each type of dependency, down to which particular queue it is going to, that level of detail. So it’s a single pane of glass in terms of the visual representation. With that, let me quickly come back to the slide. I think I’m running short of time, but JMS mocking is very much along the same lines; I’d just be replacing Kafka with the ActiveMQ version of it. I have an open source sample project available for this, which I’ll share after the talk. I’d be happy for you to take a look and give me some feedback on it.

 

There are also videos on that. And quickly moving on from there, do we stop at AsyncAPI? Actually, with Specmatic you can also do database stubbing with JDBC stubs, and you can also stub Redis; Redis happens to be on the RESP protocol, so I can stub Redis as well. What we’re trying to do is isolate systems over specifications and protocols. So that’s quickly about it. I’d like to share a quick note of thanks to the AsyncAPI community, and thanks for selecting me as a speaker. And a quick thanks to my team; these are all the people who are doing this. And of course the community itself, which we cannot fail to acknowledge; they are a big part of our open source project and we get a lot of contributions. So thanks again to all those people who’ve been helping us, and we’d love to see some more contributions, pull requests or questions. With that, thank you, and I’m happy to take any questions if we have the time, or I’m here, so you can always sync up with me in the corridors. Thank you.

 

Question: It looks like it just tests the correct test cases; what about the error ones?

 

Answer: That’s a brilliant question. What I showed you is a demo of some of the positive test cases. We also do something called generative testing. Generative testing takes a little inspiration from mutation testing and property-based testing. We know that the schema says a field is of a certain type, let’s say a string, or an enum. Now, what I can do is send other values beyond that and see whether the application breaks under that incorrect input. That is something we can test. I can’t say it’s purely property-based or mutation testing; the mutation here is not happening in the code, the mutation is happening in the request. So we have a concept like that called generative tests. I’m happy to walk you through it, and you can also look it up in our documentation; generative tests is what it’s called.
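As a purely illustrative example (not Specmatic’s actual generated cases, and the field name is a placeholder): if the schema declares a field as an enum of known categories, generative testing would keep the schema-valid value for the positive case and push values outside the schema for the negative ones, checking that the application handles them gracefully.

```json
{
  "positive": { "category": "gadget" },
  "negative": [
    { "category": "not-a-known-category" },
    { "category": 123 },
    { "category": null }
  ]
}
```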

 

Question: So we have seen that JSON is the standard format, right? Will it work for XML or WSDL?

 

Answer: We do support WSDL and SOAP as well; we have tried those, and we have done XML over HTTP. We are trying to do other protocols too; practically it’s just a matter of which area we give more importance to. But at the moment, to answer your question, we do support XML.

 

Question: In the initial architecture, there is one message that is going to Kafka, right? What if it is conditional? Say, suppose I have two more topics, and on condition one it goes to this topic, on condition two it goes to that one, and so on. Is there a way to set the expectation in the test in that case?

 

Answer: That’s an interesting point. It depends on the test setup. The way I would do it is: see, you are governing the test setup. So if your test data for the domain service is what governs which condition it will take, and therefore which topic it has to post to, I imagine you should be able to set that expectation ahead of time, because you’re predicting it. Setting an expectation is a hypothesis: this is my input, this is the background test data, and on the basis of that I expect the message to go to topic X. Then I’d say expect it on X.

 

Question: If the expectation is set on topic one, but there are more listeners there, it should not complain that it didn’t receive a message?

 

Answer: It will not. If you say, I’m expecting one message on topic X, it only checks expectations on that topic.

 

Question: Can we also use Specmatic outside of a Java application? Let’s say I’m a Node.js or a Go developer. Can I also use Specmatic from my unit tests, or is it Java only?

 

Answer: You can use it anywhere. Specmatic is just an executable at the end of the day. The wrapper that I was showing you is more of a developer experience convenience from a Java point of view. We have Node wrappers and we have Python wrappers, just for convenience. But if you do not want to use any of those, we have people from the Rust community using it more like a command line tool.
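For instance, a typical command-line use would be to serve a specification as a stub directly from a terminal. The invocation below is only a sketch of that usage; the jar name, spec file and exact subcommand should be checked against the Specmatic documentation for your version.

```bash
# Serve a specification as an HTTP stub straight from the command line,
# no Java wrapper needed (exact subcommand and flags per the Specmatic docs).
java -jar specmatic.jar stub order_service.yaml
```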

 

Thank you so much, Hari.

 
