Azure Functions
Every cloud provider offers a serverless facility which is popular with dev teams because they can upload their work without having to know too much about the environment and deployment process.
A while ago I investigated the usefulness of functions as a replacement for app services in a microservices architecture.
Azure Functions initially suffered from slow cold starts, which hurt their usefulness and performance during steep traffic spikes. I'll check whether this still holds and update the post as soon as I can.
Cost
The true cost of serverless functions is notoriously difficult to estimate in advance. Following best practices is likely to reduce cost.
Comparison with app services
The comparison between function-based pricing and 'hosting'-based pricing is similar to comparing car travel with train travel. For one person it is cheaper to travel by car than it is to commandeer an entire train, but at some point economies of scale make the train cheaper. Similarly, for one request, functions are much cheaper than app services, and there is a break-even point where an app service operating consistently near capacity may work out cheaper than functions.
In many cases functions will be cheaper right up until go-live, at which point they cross the free million-execution threshold and start receiving consistently high traffic. The cost savings from functions come from not having to over-provision your infrastructure for peak load; if you are consistently running near peak, those savings go away. Microsoft's underlying costs should be the same, since functions are app services under the hood, so any further price differential is possibly down to marketing and price matching with AWS.
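As a rough illustration of that break-even point, here is a small sketch. The prices, free grants and app service figure below are placeholder assumptions rather than quoted rates, so substitute your own region's numbers before drawing any conclusions.

```python
# Rough break-even sketch: consumption-plan functions vs a fixed app service plan.
# All prices below are assumptions for illustration only; substitute real rates.

FREE_EXECUTIONS = 1_000_000          # free executions per month (assumed)
FREE_GB_SECONDS = 400_000            # free GB-seconds per month (assumed)
PRICE_PER_MILLION_EXECUTIONS = 0.20  # USD, assumed
PRICE_PER_GB_SECOND = 0.000016       # USD, assumed
APP_SERVICE_MONTHLY = 150.00         # USD for an always-on plan, assumed

def consumption_cost(executions: int, avg_duration_s: float, memory_gb: float) -> float:
    """Monthly cost of the consumption plan for a given workload."""
    billable_executions = max(0, executions - FREE_EXECUTIONS)
    gb_seconds = executions * avg_duration_s * memory_gb
    billable_gb_seconds = max(0.0, gb_seconds - FREE_GB_SECONDS)
    return (billable_executions / 1_000_000 * PRICE_PER_MILLION_EXECUTIONS
            + billable_gb_seconds * PRICE_PER_GB_SECOND)

if __name__ == "__main__":
    # Walk up the traffic curve until functions become dearer than the flat-rate plan.
    for millions in range(1, 101):
        executions = millions * 1_000_000
        cost = consumption_cost(executions, avg_duration_s=0.5, memory_gb=0.5)
        if cost > APP_SERVICE_MONTHLY:
            print(f"Break-even near {millions}M requests/month: "
                  f"functions ~${cost:.2f} vs app service ${APP_SERVICE_MONTHLY:.2f}")
            break
    else:
        print("Functions stay cheaper across the whole range tested.")
```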
The other thing to bear in mind is that by restricting the resource pool to a 'server' you automatically throttle your response times as the number of requests goes up: responsiveness decreases while cost stays static. With functions, everything (capacity and cost) increases as the number of requests increases. Again, as with trains, quality of service degrades automatically as they approach or pass their planned capacity.
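To put some (made-up) numbers on the train analogy, a toy M/M/1 queueing model shows the two shapes: a fixed server has a flat cost and a response time that blows up near capacity, while per-request billing keeps response time roughly flat and lets the cost grow with load. The capacity, flat cost and per-request price below are all assumptions.

```python
# Toy comparison of a fixed-capacity server vs per-request-billed functions.
# Numbers are illustrative assumptions, not measurements.

SERVER_CAPACITY_RPS = 100.0     # requests/second the fixed server can sustain (assumed)
SERVER_MONTHLY_COST = 150.0     # flat monthly cost of the server (assumed)
COST_PER_REQUEST = 0.0000002    # per-request cost on a consumption plan (assumed)
SECONDS_PER_MONTH = 30 * 24 * 3600

def server_latency(load_rps: float) -> float:
    """Mean response time for an M/M/1 queue: 1 / (capacity - load)."""
    if load_rps >= SERVER_CAPACITY_RPS:
        return float("inf")       # queue grows without bound past capacity
    return 1.0 / (SERVER_CAPACITY_RPS - load_rps)

for load in (10, 50, 90, 99, 120):
    monthly_requests = load * SECONDS_PER_MONTH
    print(f"{load:>4} req/s | server: ${SERVER_MONTHLY_COST:>7.2f}, "
          f"latency {server_latency(load):>7.3f}s | "
          f"functions: ${monthly_requests * COST_PER_REQUEST:>8.2f}, latency ~flat")
```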
Reverse Conway’s law means that further splintering the system into nano-services / functions will make it even more difficult to support and maintain. It will take more co-ordinated effort to ensure consistency and knowledge sharing which will incur an indirect cost and add risk to the system.
Scaling and processing time
Functions on the Consumption pricing plan scale automatically, in most cases based on HTTP requests, and there is no way to control this; indeed, this is the main advantage of using Functions. It does, however, mean that the cost of developing and running Functions-based services grows differently from traditional infrastructure. In particular:
- Load and soak testing needs to account for the cost of the requests themselves.
- Refactoring code to move it between services can affect cost.
- Retries and circuit breakers will affect cost (see the retry sketch after this list).
- The role of each service will affect how it scales, which will in turn affect cost. This again suggests that different user groups and types (tenants, individuals, internal clients) may benefit from a separate function app so that each can scale for the correct qualitative reasons.
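On the retries point, each retry is a separately billed execution, so the retry policy directly multiplies the bill. A minimal sketch, assuming independent failures and made-up failure rates and volumes:

```python
# How a retry policy multiplies billed executions on a consumption plan.
# Failure rates, retry count and volume are illustrative assumptions.

def expected_executions(failure_rate: float, max_retries: int) -> float:
    """Expected billed executions per logical request, assuming independent
    failures and a retry after every failed attempt up to max_retries."""
    # Attempt k (0-based) only happens if all previous attempts failed: p^k.
    return sum(failure_rate ** k for k in range(max_retries + 1))

monthly_requests = 5_000_000
for failure_rate in (0.01, 0.05, 0.20):
    factor = expected_executions(failure_rate, max_retries=3)
    print(f"failure rate {failure_rate:.0%}: "
          f"{factor:.3f} executions per request, "
          f"{monthly_requests * factor:,.0f} billed executions/month")
```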
Running time per service also has an impact on cost. This means that:
- Waiting for responses from other services, and the associated timeouts, will affect cost (see the sketch after this list).
- The efficiency of the code itself will affect cost.
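Because consumption billing charges for GB-seconds of wall-clock time, a function that spends most of its life waiting on a downstream call still pays for the wait. A quick sketch with assumed prices, memory size and durations (execution-count charges and free grants are left out for clarity):

```python
# Wall-clock waiting is billable: GB-seconds = memory * duration, per execution.
# Prices and durations below are assumptions for illustration.

PRICE_PER_GB_SECOND = 0.000016   # USD, assumed consumption-plan rate
MEMORY_GB = 0.5                  # memory allocated to the function (assumed)
EXECUTIONS_PER_MONTH = 10_000_000

def monthly_duration_cost(duration_s: float) -> float:
    """GB-second cost only; execution-count charges and free grants ignored."""
    return EXECUTIONS_PER_MONTH * duration_s * MEMORY_GB * PRICE_PER_GB_SECOND

own_work_s = 0.2        # time spent in our own code (assumed)
downstream_wait_s = 2.0  # time spent blocked on another service (assumed)

print(f"our code only:          ${monthly_duration_cost(own_work_s):.2f}/month")
print(f"code + downstream wait: ${monthly_duration_cost(own_work_s + downstream_wait_s):.2f}/month")
```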