Some time ago I played around with Ocelot because I needed to build an API gateway for a personal project. By the way, I still think Ocelot is a very cool, easy-to-use and extremely useful library, so chapeau to the creator! Now things have gotten a bit more serious, since the customer I'm currently working with wanted an API gateway for the dockerized microservices application we are building from scratch. Ocelot was of course a natural choice, and given that it is mentioned many times in the Microsoft docs it wasn't too hard to convince the customer that the library is reliable. The Ocelot documentation is, in my opinion, really good at explaining things, so in this article I'd like to share some of the experience I've gathered over the past weeks with Ocelot in a dockerized application.
Routing with Ocelot and Docker
Ocelot’s primary functionality is to take incoming HTTP requests and forward them on to a downstream service. To do that we would need to configure a ReRoutes array in the ocelot.json file. A typical and simple object in the ReRoutes array would look something like this:
{
  "DownstreamPathTemplate": "/api/posts/{postId}",
  "DownstreamScheme": "https",
  "DownstreamHostAndPorts": [
    {
      "Host": "localhost",
      "Port": 80
    }
  ],
  "UpstreamPathTemplate": "/posts/{postId}",
  "UpstreamHttpMethod": [ "Put", "Delete" ]
}
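For completeness, this is roughly how Ocelot gets wired into the gateway project itself. It's a minimal sketch of a typical ASP.NET Core 2.x setup, not necessarily our exact code; the file and project layout are placeholders:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Ocelot.DependencyInjection;
using Ocelot.Middleware;

// Program.cs of the gateway project - a sketch, not our exact setup.
public class Program
{
    public static void Main(string[] args)
    {
        WebHost.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((context, config) =>
            {
                // Load the route configuration shown above
                config.AddJsonFile("ocelot.json", optional: false, reloadOnChange: true);
            })
            // Register Ocelot's services...
            .ConfigureServices(services => services.AddOcelot())
            // ...and plug its middleware into the pipeline
            .Configure(app => app.UseOcelot().Wait())
            .Build()
            .Run();
    }
}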
Seems easy so far. But what host and port should we use in DownstreamHostAndPorts when running everything in Docker?
Ports
Defining the ports is really easy because you can specify the exposed port in the Dockerfile of each service by including something like this:
EXPOSE 5500/tcp
ENV ASPNETCORE_URLS http://*:5500
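To put those two lines in context, a service's Dockerfile might look roughly like this. This is just a sketch of a typical ASP.NET Core multi-stage build; the image tags and project names are illustrative, not our actual files:

# Hypothetical Dockerfile for the account service (names and tags are illustrative)
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY . .
RUN dotnet publish Account/Account.csproj -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS runtime
WORKDIR /app
COPY --from=build /app/publish .

# The two lines that matter here: expose a fixed port and bind Kestrel to it
EXPOSE 5500/tcp
ENV ASPNETCORE_URLS http://*:5500

ENTRYPOINT ["dotnet", "Account.dll"]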
Of course, when bringing everything up it's up to the orchestrator you use to build the underlying network and make everything work. For now we haven't used anything fancy like Kubernetes; we rely on docker-compose instead. And in the docker-compose.yml file you can simply specify port mappings like:
ports:
  - "5500:5500"
Of course, we also mapped an external port here so that we can access the services locally through localhost on the same ports (to avoid getting confused). So we can simply use the exposed ports as the port value in the ocelot.json file. What about the host?
Hosts
Things get a little more complicated here. It's clear that we can't use "localhost" as in the ReRoutes sample above. So the first time I ran Ocelot in this dockerized environment my thought was to use IP addresses. Seems legit. However, I soon found out that the services did not get the same IP address after each docker-compose up. So the next thing I did was to configure a network in the docker-compose.yml file. Now I was able to assign static IP addresses to all services and use those IP addresses as hosts in ocelot.json.
However, I also saw some drawbacks in this approach. First of all, it adds unnecessary complexity to the Docker configuration, since you now have to explicitly take care of everything network related. Second, what if at some point we run several containers for the same service and load balance the traffic between them? It's true that Ocelot supports some load balancing, but adding a Host/Port object to ocelot.json for every container doesn't seem very maintainable to me.
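To illustrate the maintainability point, this is roughly what such a load-balanced route would look like with static container IPs, using Ocelot's LoadBalancerOptions. The addresses, paths and ports here are made up for the example:

{
  "DownstreamPathTemplate": "/api/posts/{postId}",
  "DownstreamScheme": "http",
  "DownstreamHostAndPorts": [
    { "Host": "10.5.0.2", "Port": 5500 },
    { "Host": "10.5.0.3", "Port": 5500 },
    { "Host": "10.5.0.4", "Port": 5500 }
  ],
  "UpstreamPathTemplate": "/posts/{postId}",
  "UpstreamHttpMethod": [ "Get" ],
  "LoadBalancerOptions": {
    "Type": "RoundRobin"
  }
}

One Host/Port entry per container, for every route, for every service: that adds up quickly.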
So I did some research and found out that when you run docker-compose up, Docker configures a default network for you and, as part of that, sets up name resolution so each container can be reached by the friendly service name you specify in docker-compose.yml.
For instance, if we have something similar to this in docker-compose.yml
accountapi:
  image: myapp/accountapi:latest
  container_name: accountapi
  build:
    context: .
    dockerfile: Account/Dockerfile
…then the name "accountapi" would resolve to that container on the Compose network, so I can simply use this friendly name when configuring hosts in ocelot.json, as shown below. That is really cool and fits the purpose for now.
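Putting the pieces together, a ReRoutes entry for that service would then look roughly like this (a sketch; the account paths are just an example on my side):

{
  "DownstreamPathTemplate": "/api/accounts/{accountId}",
  "DownstreamScheme": "http",
  "DownstreamHostAndPorts": [
    {
      "Host": "accountapi",
      "Port": 5500
    }
  ],
  "UpstreamPathTemplate": "/accounts/{accountId}",
  "UpstreamHttpMethod": [ "Get", "Put", "Delete" ]
}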
The only thing we will need to be very attentive to is that when we use another orchestrator for a real deployment (theoretically we could still use docker-compose for that too, but it's not 100% decided how we'll do it in the end), we'll have to make sure the same friendly names still resolve to the right services.
One final challenge
A final challenge I had with Ocelot is not strictly related to Docker, but it's still annoying. I'm not sure why, but Ocelot seems to ignore the BaseUrl in my ocelot.json file and hence doesn't set the Location header correctly on HTTP 201 responses. So I get something like "http://accountapi:5500/{and the endpoint}" when I would expect something like "http://myApiGw/{and the endpoint}". I remember I had the exact same issue last summer when playing around with Ocelot.
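For reference, the BaseUrl I'm talking about is the one under GlobalConfiguration in ocelot.json, something like this (the gateway URL is just a placeholder):

"GlobalConfiguration": {
  "BaseUrl": "http://myApiGw"
}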
I even opened an issue on the Ocelot repository and shared the configuration file, but we didn't manage to find out why it was happening. So, to avoid wasting a lot of time on this again, it took me literally ten minutes to create a middleware that manipulates the Location header on HTTP 201 responses based on values I configure in appsettings.json. If you're curious, that was the background for the article on manipulating HTTP response headers.
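The middleware itself is nothing fancy. Here is a minimal sketch of the idea; the "GatewayBaseUrl" setting name and the exact rewriting rule are my own choices for the example, not necessarily what I'd ship as-is:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;

// A sketch of a middleware that rewrites the Location header on 201 responses
// so it points at the gateway instead of the internal service name.
// "GatewayBaseUrl" is a hypothetical key in appsettings.json.
public class LocationHeaderMiddleware
{
    private readonly RequestDelegate _next;
    private readonly string _gatewayBaseUrl;

    public LocationHeaderMiddleware(RequestDelegate next, IConfiguration configuration)
    {
        _next = next;
        _gatewayBaseUrl = configuration["GatewayBaseUrl"];
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // Register the callback before the response starts being written
        context.Response.OnStarting(() =>
        {
            if (context.Response.StatusCode == StatusCodes.Status201Created &&
                context.Response.Headers.TryGetValue("Location", out var location))
            {
                // Keep only the path of the downstream Location header and
                // prefix it with the public gateway base URL.
                var path = new Uri(location.ToString()).PathAndQuery;
                context.Response.Headers["Location"] = _gatewayBaseUrl.TrimEnd('/') + path;
            }
            return Task.CompletedTask;
        });

        await _next(context);
    }
}

// Registered in Startup.Configure with: app.UseMiddleware<LocationHeaderMiddleware>();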
If you have any thoughts on Ocelot in a dockerized application feel free to comment here. I’m still fairly new to the whole Docker thing, so any opinion would lead to a discussion from which we’ll all learn something. Cheers!
Comments
I am struggling to make Ocelot work with host ports. It works like a charm if the proxy service runs outside a container and all the other downstream services run in containers. But the moment I containerize the proxy service, it fails.
I am trying to use Ocelot to redirect a request from a web application using HttpClient to an ASP.NET Core Web API running in Docker. Ocelot is also running in a container. Both containers (ProductManagementAPI and Ocelot) are created using docker-compose.
Below is my ocelot.json file:
"ReRoutes": [
  {
    "DownstreamPathTemplate": "/api/product/product",
    "DownstreamScheme": "http",
    "DownstreamHostAndPorts": [
      {
        "Host": "productmanagementapi",
        "Port": "8000"
      }
    ],
    "UpstreamPathTemplate": "/product/product",
    "UpstreamHttpMethod": [ "Get", "Post", "Put", "Delete", "Options" ],
    "Priority": 1
  }
],
"GlobalConfiguration": {
}
After running the docker-compose up command, when I directly browse http://localhost:8100/api/product/product, I get the desired response.
But when I make a request from the web app running locally to get the response from the API running in the container via the Ocelot API gateway (also running in a container), I get the error below:
ocelotapigateway_1 | requestId: 0HLTAOJJEL03J:00000001, previousRequestId: no previous request id, message: 403 (Forbidden) status code, request uri: http://productmanagementapi:8100/api/product/product
Even if I replace the downstream host in the ocelot.json file with "localhost", I get an error.
I have not enabled authorization in any of the above applications.
When I directly call the Web API running in Docker using HttpClient from the web application, it works. But when I place the Ocelot gateway in between, it stops working.
Typo in my question above: the port should be "Port": "8100" and not "Port": "8000".