Interconnecting containers at scale with NGINX
Presented by Sarah Novotny, Technical Evangelist, NGINX

Or: how NGINX can act as your stevedores, properly routing and accelerating HTTP and TCP traffic to pods of containers across a globally distributed environment. NGINX can manage and route traffic across your distributed microservices architecture, offering a seamless interface to your customers and giving you granular control over backend service scaling and versioning. Add in caching and load balancing, and the efficiencies of an application delivery platform become apparent.

Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere. Docker containers can encapsulate any payload and will run consistently on, and between, virtually any server. The same container that a developer builds and tests on a laptop will run at scale, in production, on VMs, bare-metal servers, OpenStack clusters, public cloud instances, or combinations of the above.
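The routing, load-balancing, and caching role described above can be sketched as a minimal NGINX configuration. All names and addresses here are illustrative assumptions: `app_pod` is a hypothetical upstream group of container backends, `svc_cache` an example cache zone, and the IP:port pairs stand in for whatever addresses your container platform assigns.

```nginx
# Cache proxied responses on disk; "svc_cache" is an illustrative zone name.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=svc_cache:10m
                 max_size=1g inactive=60m;

# A pod of containerized backends; addresses are placeholders.
upstream app_pod {
    least_conn;                  # send each request to the least-busy container
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080 backup;  # used only if the others are unavailable
}

server {
    listen 80;

    location / {
        proxy_pass http://app_pod;           # load-balance across the pod
        proxy_cache svc_cache;               # serve repeat requests from cache
        proxy_cache_valid 200 10m;           # cache successful responses for 10 minutes
        proxy_set_header Host $host;         # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Scaling a service up or down then becomes a matter of editing the `upstream` block (or generating it from your container platform's service registry) and reloading NGINX, while clients continue to see a single stable endpoint.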