Saturday, 23 April 2016

Servers? Where We’re Going, We Don’t Need Servers.






by Sarang Nagmote



Category - Cloud Computing
More Information & Updates Available at: http://vibranttechnologies.co.in




This article is featured in the new DZone Guide to Building and Deploying Applications on the Cloud, scheduled for release on Monday 4/25. Stay tuned for the 40-page PDF containing original research (based on survey data from over 700 developers and architects), articles by top engineers, a poster of the 12-factor app, and more.

"I dont know how to tell you this, but youre in a time machine."

Advancements in application architecture patterns all have a core purpose: developer empowerment. In today’s fast-paced and ultra-competitive world where every company has to be a software company, organizations are doing all they can to enable their developers to ship applications faster. With such high expectations, what could possibly be more promising than a serverless world?
Despite the implications, serverless computing doesn’t mean getting rid of the data center through some form of black magic that powers compute cycles in thin air. At its core, the concept promises a serverless experience for developers, who never have to think about provisioning or managing infrastructure resources to power workloads at any scale. This is done by decoupling backend jobs as independent microservices that run through an automated workflow when a predetermined event occurs. These events arise from a variety of sources—from an application (such as a webhook), from the physical world (such as a sensor capture), from within a system (such as a database change), or on a schedule (such as a cron job). The event then triggers the workflow behind the scenes—spin up, execute, tear down. Rinse and repeat at massive scale. This form of event-driven computing shifts configuration from the systems layer to the application layer, turning the DevOps mantra “Infrastructure as Code” into “Infrastructure in Response to Code.”
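To make the shape of such a workload concrete, here is a minimal sketch of an event-driven job, assuming the platform hands the event payload to the process as JSON on stdin; the delivery mechanism and the field names are made up for the example and vary by platform.

    #!/usr/bin/env python3
    # Minimal sketch of an event-driven job: spin up, execute, tear down.
    # Assumes the event payload arrives as JSON on stdin (platform-specific).
    import json
    import sys


    def handle(event: dict) -> None:
        # Single responsibility: react to one event, then exit.
        if event.get("type") == "sensor.capture":
            print(f"processing sensor reading: {event['payload']['value']}")
        else:
            print(f"ignoring event type: {event.get('type')}")


    if __name__ == "__main__":
        handle(json.loads(sys.stdin.read() or "{}"))
        # No long-running server here: the process ends and the container is torn down.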

"Theres something very familiar about all this."

Cloud computing has come a long way in recent years. Virtualized infrastructure resources with elastic auto-scaling capabilities, platforms that abstract operations for delivery and management, along with a wide range of complementary toolkits, have all come together to provide developers with a highly effective environment for building large-scale distributed applications. With these continued advancements across the entire cloud stack, what makes this serverless trend any different?
The primary difference lies in the nature of the workloads. Simply put, an application behaves differently than a job. When we push an application to the cloud, we do so thinking about where it will live and how it will run, because it has a known IP address and an open port to accept incoming requests. When we build a job, on the other hand, we think only about when it will execute. In breaking away from the traditional request/response model toward an event-driven model, automated workflows react to dynamic environments accordingly. Even though there is compute involved, it sits completely outside of the development lifecycle, which is what makes the paradigm “serverless.”
In order for such laissez-faire development to pass even the most basic smoke test, there must first be a discrete unit of compute that we’re confident is consistent from development to production. Container technologies such as Docker provide a lightweight runtime environment that isolates a job process along with its dependencies, clearly specified through a standard packaging format. Compared to VMs, which have a much wider scope, containers provide only what is needed for each individual job, minimizing their footprint and ensuring consistency. If we then follow the commonly accepted characteristics of microservices when writing our code—loosely coupled, stateless services that each perform a single responsibility—what we’re left with is a collection of independent and portable workloads that can be executed at will without the need for oversight. Fire away.

"What about all that talk about screwing up future events?"

While event-driven computing patterns have existed for some time, the serverless trend really caught on with the introduction of AWS Lambda in late 2014. Microsoft has a similar offering with Azure WebJobs, and Google recently announced its version with Google Functions. For a solution independent of any single infrastructure provider, Iron.io offers a container-based serverless computing platform that is available in any cloud, public or private.
Lest we forget: the more abstraction put forth, the more activity happening behind the scenes. To actually reap the benefits of a serverless architecture, one must fully grasp the software development lifecycle and the underlying operations, so that it doesn’t become a hapless architecture. The following is an introduction to the process of building Docker-based serverless jobs, along with some best practices to help get you started.[1]

Building the Job

Developing with Docker is a breeze, as you can work across multiple languages and environments without clutter or conflict. When your job is ready to build, you specify the runtime by writing a Dockerfile that sets the executable, dependencies, and any additional configuration needed for the process (a sketch follows the checklist below).
  • Choose a lightweight base layer. This can be a minimal Linux distribution such as Alpine or Busybox.
  • Keep the layers to a minimum. Don’t run an OS update, and consolidate RUN operations into a single instruction using && when possible.
  • Limit the external dependencies to only what’s needed for the process itself, and vendor ahead of time so there’s no additional importing when the job is started.
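As a rough illustration of those points, a Dockerfile for a Python-based job might look like the sketch below; the base image, file names, and entrypoint are assumptions for the example rather than a prescribed layout.

    # Illustrative Dockerfile sketch for a small, single-purpose job image.
    # Lightweight base layer: a minimal Alpine-based image.
    FROM python:3-alpine
    WORKDIR /job
    # Dependencies vendored ahead of time, so nothing is fetched at job start.
    COPY vendor/ ./vendor/
    COPY job.py .
    # One process, one responsibility: run the job and exit.
    ENTRYPOINT ["python", "job.py"]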

Uploading the Job Image

Each serverless job is built as a Docker image and uploaded to a registry, where it can be pulled on demand. This can be a third-party public registry such as Docker Hub or Quay.io, or your own private registry. A sketch of a build-and-push pipeline step follows the checklist below.
  • Incorporate the job code into a CI/CD pipeline, building the container image and uploading it to a repository.
  • Version your images using Docker tags and document them properly. Don’t rely on :latest as the tag that should always run.
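As a rough sketch of such a pipeline step, the snippet below builds and pushes an explicitly versioned image using the Docker SDK for Python; the registry, repository name, and tag are hypothetical, and a registry login is assumed to already exist.

    # Illustrative CI step: build the job image and push a versioned tag.
    # Assumes the "docker" Python package is installed and `docker login` has run.
    import docker

    REPO = "registry.example.com/team/resize-image"  # hypothetical repository
    TAG = "1.4.2"                                     # explicit version, not :latest

    client = docker.from_env()

    # Build the image from the directory containing the Dockerfile.
    image, _build_logs = client.images.build(path=".", tag=f"{REPO}:{TAG}")

    # Push the explicitly versioned tag to the registry.
    for line in client.images.push(REPO, tag=TAG, stream=True, decode=True):
        print(line)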

Setting Event Triggers

With such a potentially wide range of event sources, the associated jobs can pile up quickly. It is crucial to set the triggers properly to ensure the right workflows are kicked off and that no data is lost in the process (a sketch of the asynchronous path follows this list).
  • Map each job to your API: at a minimum within your documentation, though you can also expose endpoints for direct requests. Using an API gateway is a common way to manage events and endpoints across systems.
  • Use a load balancer for synchronous requests and a message queue for asynchronous ones, so that work can be throttled and buffered when load is high.
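As a sketch of that asynchronous path, the snippet below accepts an event behind an API gateway and buffers it on a message queue instead of running the job inline; Flask and redis-py are assumed to be available, and the endpoint and queue names are hypothetical.

    # Illustrative trigger endpoint: enqueue the event, let workers drain the queue.
    import json

    import redis
    from flask import Flask, request

    app = Flask(__name__)
    queue = redis.Redis()  # stands in for the message queue that buffers work


    @app.route("/events/image-uploaded", methods=["POST"])
    def image_uploaded():
        # Don't execute the job inline; enqueue the payload and return immediately.
        queue.rpush("jobs:resize-image", json.dumps(request.get_json(force=True)))
        return "", 202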

Configuring the Runtime Environment

Operational complexities such as service registration and container orchestration are abstracted away from the development lifecycle; however, don’t just “set it and forget it” when running in a production environment. A sample set of runtime settings follows the checklist below.
  • Profile your workloads to find their optimal compute environment. For example, memory-intensive workloads need more memory allocated per job.
  • Set how many concurrent jobs can execute at any given time. This can help keep costs down and ensure you don’t overload the system.
  • Determine what happens when a job fails. If you want to auto-retry, set a maximum number of attempts with a delay between them.
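As an illustration, per-job runtime settings can be captured in a small declaration like the sketch below; the field names are invented for the example and do not correspond to any particular platform’s schema.

    # Illustrative runtime settings for one job; field names are made up.
    RESIZE_IMAGE_JOB = {
        "image": "registry.example.com/team/resize-image:1.4.2",
        "memory_mb": 512,           # profiled: this workload is memory-heavy
        "timeout_seconds": 60,      # kill runaway executions
        "max_concurrency": 20,      # cap parallel runs to control cost and load
        "retries": 3,               # behavior on failure...
        "retry_delay_seconds": 30,  # ...with a delay between attempts
    }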

Securing and Monitoring the Job

To be production-grade, wrapping the environment with proper security and monitoring is essential. Given the levels of abstraction that serverless computing provides, it’s even more important to gain insight into what’s happening behind the scenes (a sketch follows the checklist below).
  • Payload data should be encrypted at the source and then decrypted within the job code itself. Public-key encryption is a common technique in this scenario.
  • Connections from within a job process to databases outside the network should be secured, either through a VPN or by IP whitelisting.
  • Inspect stdout and stderr for each job process. You can pipe these logs to syslog or a third-party logging service.
  • Maintain a real-time dashboard of all queued, running, failed, and finished jobs.
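As a sketch of the first and third points, the snippet below decrypts a payload inside the job and emits structured logs on stdout; it assumes PyNaCl is installed, and the key-handling details are simplified for the example.

    # Illustrative sketch: decrypt the payload in the job, log to stdout.
    # Assumes PyNaCl is installed; key management is simplified here.
    import json
    import os
    import sys

    from nacl.public import PrivateKey, SealedBox

    # Private key injected into the job environment by the operator (assumption).
    private_key = PrivateKey(bytes.fromhex(os.environ["JOB_PRIVATE_KEY_HEX"]))

    # The payload was encrypted at the source with the matching public key.
    ciphertext = bytes.fromhex(sys.stdin.read().strip())
    event = json.loads(SealedBox(private_key).decrypt(ciphertext))

    # Write to stdout/stderr so the platform can pipe logs to syslog or a service.
    print(json.dumps({"status": "decrypted", "event_type": event.get("type")}))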
[Figure: End-to-end lifecycle of a serverless workload]

"What happens in the future?"

With this serverless computing trend, the gap between the infrastructure layer and the application layer narrows even further. A well-orchestrated container fleet combined with a well-choreographed set of workloads leads to more intelligent systems across the board. The event-driven patterns set forth in this article give developers and architects a way to respond to the ever-changing environments of the modern world, where everything from our bodies to the planet is connected. The next evolution in cloud computing will stem from these patterns to create predictive systems that can learn and adapt accordingly. A workload-aware global compute cloud that knows what, when, and where best to run workloads is the ultimate vision for developer empowerment in this modern cloud era. It won’t take a flux capacitor to get there; as an ecosystem, we’re already on our way to enabling this future.
[1] Services such as AWS Lambda, Azure WebJobs, and Google Functions have their own proprietary build, package, and runtime environments, which can be followed through their documentation. This article is focused on Docker-based platforms such as the one provided by Iron.io.
